00:00:00.000 Started by upstream project "autotest-nightly" build number 4282
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3645
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.139 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.139 The recommended git tool is: git
00:00:00.140 using credential 00000000-0000-0000-0000-000000000002
00:00:00.141 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.195 Fetching changes from the remote Git repository
00:00:00.197 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.239 Using shallow fetch with depth 1
00:00:00.239 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.239 > git --version # timeout=10
00:00:00.277 > git --version # 'git version 2.39.2'
00:00:00.278 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.313 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.313 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.322 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.333 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.344 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.344 > git config core.sparsecheckout # timeout=10
00:00:08.354 > git read-tree -mu HEAD # timeout=10
00:00:08.369 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.390 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.390 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.491 [Pipeline] Start of Pipeline
00:00:08.501 [Pipeline] library
00:00:08.503 Loading library shm_lib@master
00:00:08.503 Library shm_lib@master is cached. Copying from home.
00:00:08.518 [Pipeline] node
00:00:08.531 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.532 [Pipeline] {
00:00:08.541 [Pipeline] catchError
00:00:08.542 [Pipeline] {
00:00:08.556 [Pipeline] wrap
00:00:08.566 [Pipeline] {
00:00:08.577 [Pipeline] stage
00:00:08.579 [Pipeline] { (Prologue)
00:00:08.809 [Pipeline] sh
00:00:09.107 + logger -p user.info -t JENKINS-CI
00:00:09.127 [Pipeline] echo
00:00:09.128 Node: GP11
00:00:09.136 [Pipeline] sh
00:00:09.430 [Pipeline] setCustomBuildProperty
00:00:09.439 [Pipeline] echo
00:00:09.440 Cleanup processes
00:00:09.444 [Pipeline] sh
00:00:09.720 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.720 2737257 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.734 [Pipeline] sh
00:00:10.018 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.019 ++ grep -v 'sudo pgrep'
00:00:10.019 ++ awk '{print $1}'
00:00:10.019 + sudo kill -9
00:00:10.019 + true
00:00:10.030 [Pipeline] cleanWs
00:00:10.039 [WS-CLEANUP] Deleting project workspace...
00:00:10.039 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.043 [WS-CLEANUP] done
00:00:10.048 [Pipeline] setCustomBuildProperty
00:00:10.063 [Pipeline] sh
00:00:10.340 + sudo git config --global --replace-all safe.directory '*'
00:00:10.458 [Pipeline] httpRequest
00:00:10.828 [Pipeline] echo
00:00:10.830 Sorcerer 10.211.164.20 is alive
00:00:10.841 [Pipeline] retry
00:00:10.843 [Pipeline] {
00:00:10.858 [Pipeline] httpRequest
00:00:10.862 HttpMethod: GET
00:00:10.863 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.863 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.878 Response Code: HTTP/1.1 200 OK
00:00:10.878 Success: Status code 200 is in the accepted range: 200,404
00:00:10.878 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.062 [Pipeline] }
00:00:14.080 [Pipeline] // retry
00:00:14.088 [Pipeline] sh
00:00:14.369 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.384 [Pipeline] httpRequest
00:00:14.688 [Pipeline] echo
00:00:14.690 Sorcerer 10.211.164.20 is alive
00:00:14.700 [Pipeline] retry
00:00:14.702 [Pipeline] {
00:00:14.716 [Pipeline] httpRequest
00:00:14.720 HttpMethod: GET
00:00:14.721 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:14.721 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:14.739 Response Code: HTTP/1.1 200 OK
00:00:14.739 Success: Status code 200 is in the accepted range: 200,404
00:00:14.740 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:01:20.875 [Pipeline] }
00:01:20.893 [Pipeline] // retry
00:01:20.901 [Pipeline] sh
00:01:21.181 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:01:24.476 [Pipeline] sh
00:01:24.755 + git -C spdk log --oneline -n5
00:01:24.755 d47eb51c9 bdev: fix a race between reset start and complete
00:01:24.755 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:01:24.755 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:01:24.755 4bcab9fb9 correct kick for CQ full case
00:01:24.755 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:01:24.765 [Pipeline] }
00:01:24.779 [Pipeline] // stage
00:01:24.796 [Pipeline] stage
00:01:24.804 [Pipeline] { (Prepare)
00:01:24.821 [Pipeline] writeFile
00:01:24.836 [Pipeline] sh
00:01:25.112 + logger -p user.info -t JENKINS-CI
00:01:25.123 [Pipeline] sh
00:01:25.400 + logger -p user.info -t JENKINS-CI
00:01:25.410 [Pipeline] sh
00:01:25.688 + cat autorun-spdk.conf
00:01:25.688 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.688 SPDK_TEST_NVMF=1
00:01:25.688 SPDK_TEST_NVME_CLI=1
00:01:25.688 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:25.688 SPDK_TEST_NVMF_NICS=e810
00:01:25.688 SPDK_RUN_ASAN=1
00:01:25.688 SPDK_RUN_UBSAN=1
00:01:25.688 NET_TYPE=phy
00:01:25.693 RUN_NIGHTLY=1
00:01:25.697 [Pipeline] readFile
00:01:25.719 [Pipeline] withEnv
00:01:25.721 [Pipeline] {
00:01:25.732 [Pipeline] sh
00:01:26.009 + set -ex
00:01:26.009 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:26.009 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:26.009 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.009 ++ SPDK_TEST_NVMF=1
00:01:26.009 ++ SPDK_TEST_NVME_CLI=1
00:01:26.009 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:26.009 ++ SPDK_TEST_NVMF_NICS=e810
00:01:26.009 ++ SPDK_RUN_ASAN=1
00:01:26.009 ++ SPDK_RUN_UBSAN=1
00:01:26.009 ++ NET_TYPE=phy
00:01:26.009 ++ RUN_NIGHTLY=1
00:01:26.009 + case $SPDK_TEST_NVMF_NICS in
00:01:26.009 + DRIVERS=ice
00:01:26.009 + [[ tcp == \r\d\m\a ]]
00:01:26.009 + [[ -n ice ]]
00:01:26.009 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:26.009 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:26.009 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:26.009 rmmod: ERROR: Module irdma is not currently loaded
00:01:26.009 rmmod: ERROR: Module i40iw is not currently loaded
00:01:26.009 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:26.009 + true
00:01:26.009 + for D in $DRIVERS
00:01:26.009 + sudo modprobe ice
00:01:26.009 + exit 0
00:01:26.017 [Pipeline] }
00:01:26.033 [Pipeline] // withEnv
00:01:26.038 [Pipeline] }
00:01:26.054 [Pipeline] // stage
00:01:26.064 [Pipeline] catchError
00:01:26.066 [Pipeline] {
00:01:26.080 [Pipeline] timeout
00:01:26.080 Timeout set to expire in 1 hr 0 min
00:01:26.082 [Pipeline] {
00:01:26.096 [Pipeline] stage
00:01:26.099 [Pipeline] { (Tests)
00:01:26.115 [Pipeline] sh
00:01:26.398 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.398 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.398 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.398 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:26.398 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:26.398 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:26.398 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:26.398 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.398 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:26.398 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.398 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:26.398 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.398 + source /etc/os-release
00:01:26.398 ++ NAME='Fedora Linux'
00:01:26.398 ++ VERSION='39 (Cloud Edition)'
00:01:26.398 ++ ID=fedora
00:01:26.399 ++ VERSION_ID=39
00:01:26.399 ++ VERSION_CODENAME=
00:01:26.399 ++ PLATFORM_ID=platform:f39
00:01:26.399 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:26.399 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:26.399 ++ LOGO=fedora-logo-icon
00:01:26.399 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:26.399 ++ HOME_URL=https://fedoraproject.org/
00:01:26.399 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:26.399 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:26.399 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:26.399 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:26.399 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:26.399 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:26.399 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:26.399 ++ SUPPORT_END=2024-11-12
00:01:26.399 ++ VARIANT='Cloud Edition'
00:01:26.399 ++ VARIANT_ID=cloud
00:01:26.399 + uname -a
00:01:26.399 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:26.399 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:27.333 Hugepages
00:01:27.333 node hugesize free / total
00:01:27.333 node0 1048576kB 0 / 0
00:01:27.333 node0 2048kB 0 / 0
00:01:27.333 node1 1048576kB 0 / 0
00:01:27.333 node1 2048kB 0 / 0
00:01:27.333
00:01:27.333 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:27.333 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:27.333 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:27.333 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:27.333 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:27.333 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:27.333 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:27.333 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:27.333 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:27.333 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:27.333 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:27.333 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:27.333 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:27.333 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:27.333 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:27.333 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:27.333 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:27.333 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:27.333 + rm -f /tmp/spdk-ld-path
00:01:27.333 + source autorun-spdk.conf
00:01:27.333 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.333 ++ SPDK_TEST_NVMF=1
00:01:27.333 ++ SPDK_TEST_NVME_CLI=1
00:01:27.333 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.333 ++ SPDK_TEST_NVMF_NICS=e810
00:01:27.333 ++ SPDK_RUN_ASAN=1
00:01:27.333 ++ SPDK_RUN_UBSAN=1
00:01:27.333 ++ NET_TYPE=phy
00:01:27.333 ++ RUN_NIGHTLY=1
00:01:27.333 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:27.333 + [[ -n '' ]]
00:01:27.333 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:27.333 + for M in /var/spdk/build-*-manifest.txt
00:01:27.333 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:27.333 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:27.333 + for M in /var/spdk/build-*-manifest.txt
00:01:27.333 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:27.333 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:27.333 + for M in /var/spdk/build-*-manifest.txt
00:01:27.333 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:27.333 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:27.333 ++ uname
00:01:27.592 + [[ Linux == \L\i\n\u\x ]]
00:01:27.592 + sudo dmesg -T
00:01:27.592 + sudo dmesg --clear
00:01:27.592 + dmesg_pid=2738482
00:01:27.592 + [[ Fedora Linux == FreeBSD ]]
00:01:27.592 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.592 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.592 + sudo dmesg -Tw
00:01:27.592 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:27.592 + [[ -x /usr/src/fio-static/fio ]]
00:01:27.592 + export FIO_BIN=/usr/src/fio-static/fio
00:01:27.592 + FIO_BIN=/usr/src/fio-static/fio
00:01:27.592 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:27.592 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:27.592 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:27.592 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.592 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.592 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:27.592 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.592 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.592 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:27.592 07:25:19 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:27.592 07:25:19 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:27.592 07:25:19 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.592 07:25:19 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:27.592 07:25:19 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:27.592 07:25:19 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.592 07:25:19 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:27.592 07:25:19 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1
00:01:27.592 07:25:19 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:27.592 07:25:19 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:27.592 07:25:19 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1
00:01:27.592 07:25:19 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:27.592 07:25:19 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:27.592 07:25:19 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:27.592 07:25:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:27.592 07:25:19 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:27.592 07:25:19 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:27.592 07:25:19 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:27.592 07:25:19 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:27.592 07:25:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.592 07:25:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.592 07:25:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.592 07:25:19 -- paths/export.sh@5 -- $ export PATH
00:01:27.592 07:25:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.593 07:25:19 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:27.593 07:25:19 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:27.593 07:25:19 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731997519.XXXXXX
00:01:27.593 07:25:19 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731997519.G2egPQ
00:01:27.593 07:25:19 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:27.593 07:25:19 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:27.593 07:25:19 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:27.593 07:25:19 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:27.593 07:25:19 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:27.593 07:25:19 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:27.593 07:25:19 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:27.593 07:25:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.593 07:25:19 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:27.593 07:25:19 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:27.593 07:25:19 -- pm/common@17 -- $ local monitor
00:01:27.593 07:25:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.593 07:25:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.593 07:25:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.593 07:25:19 -- pm/common@21 -- $ date +%s
00:01:27.593 07:25:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.593 07:25:19 -- pm/common@21 -- $ date +%s
00:01:27.593 07:25:19 -- pm/common@25 -- $ sleep 1
00:01:27.593 07:25:19 -- pm/common@21 -- $ date +%s
00:01:27.593 07:25:19 -- pm/common@21 -- $ date +%s
00:01:27.593 07:25:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731997519
00:01:27.593 07:25:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731997519
00:01:27.593 07:25:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731997519
00:01:27.593 07:25:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731997519
00:01:27.593 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731997519_collect-vmstat.pm.log
00:01:27.593 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731997519_collect-cpu-load.pm.log
00:01:27.593 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731997519_collect-cpu-temp.pm.log
00:01:27.593 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731997519_collect-bmc-pm.bmc.pm.log
00:01:28.525 07:25:20 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:28.525 07:25:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:28.525 07:25:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:28.525 07:25:20 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.525 07:25:20 -- spdk/autobuild.sh@16 -- $ date -u
00:01:28.525 Tue Nov 19 06:25:20 AM UTC 2024
00:01:28.525 07:25:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:28.525 v25.01-pre-190-gd47eb51c9
00:01:28.525 07:25:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:28.525 07:25:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:28.525 07:25:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:28.525 07:25:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:28.525 07:25:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.783 ************************************
00:01:28.783 START TEST asan
00:01:28.783 ************************************
00:01:28.783 07:25:20 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:28.783 using asan
00:01:28.783
00:01:28.783 real 0m0.000s
00:01:28.783 user 0m0.000s
00:01:28.783 sys 0m0.000s
00:01:28.783 07:25:20 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:28.783 07:25:20 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.783 ************************************
00:01:28.783 END TEST asan
00:01:28.783 ************************************
00:01:28.783 07:25:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:28.783 07:25:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:28.783 07:25:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:28.783 07:25:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:28.783 07:25:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.783 ************************************
00:01:28.783 START TEST ubsan
00:01:28.783 ************************************
00:01:28.783 07:25:20 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:28.783 using ubsan
00:01:28.783
00:01:28.784 real 0m0.000s
00:01:28.784 user 0m0.000s
00:01:28.784 sys 0m0.000s
00:01:28.784 07:25:20 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:28.784 07:25:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.784 ************************************
00:01:28.784 END TEST ubsan
************************************
00:01:28.784 07:25:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:28.784 07:25:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:28.784 07:25:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:28.784 07:25:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:28.784 07:25:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:28.784 07:25:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:28.784 07:25:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:28.784 07:25:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:28.784 07:25:20 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:28.784 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:28.784 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:29.042 Using 'verbs' RDMA provider
00:01:39.578 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:49.546 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:49.546 Creating mk/config.mk...done.
00:01:49.546 Creating mk/cc.flags.mk...done.
00:01:49.546 Type 'make' to build.
00:01:49.546 07:25:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:01:49.546 07:25:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:49.546 07:25:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:49.546 07:25:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.546 ************************************
00:01:49.546 START TEST make
00:01:49.546 ************************************
00:01:49.546 07:25:41 make -- common/autotest_common.sh@1129 -- $ make -j48
00:01:49.805 make[1]: Nothing to be done for 'all'.
00:01:59.810 The Meson build system
00:01:59.810 Version: 1.5.0
00:01:59.810 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:59.810 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:59.810 Build type: native build
00:01:59.810 Program cat found: YES (/usr/bin/cat)
00:01:59.810 Project name: DPDK
00:01:59.810 Project version: 24.03.0
00:01:59.810 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:59.810 C linker for the host machine: cc ld.bfd 2.40-14
00:01:59.810 Host machine cpu family: x86_64
00:01:59.811 Host machine cpu: x86_64
00:01:59.811 Message: ## Building in Developer Mode ##
00:01:59.811 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:59.811 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:59.811 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:59.811 Program python3 found: YES (/usr/bin/python3)
00:01:59.811 Program cat found: YES (/usr/bin/cat)
00:01:59.811 Compiler for C supports arguments -march=native: YES
00:01:59.811 Checking for size of "void *" : 8
00:01:59.811 Checking for size of "void *" : 8 (cached)
00:01:59.811 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:59.811 Library m found: YES
00:01:59.811 Library numa found: YES
00:01:59.811 Has header "numaif.h" : YES
00:01:59.811 Library fdt found: NO
00:01:59.811 Library execinfo found: NO
00:01:59.811 Has header "execinfo.h" : YES
00:01:59.811 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:59.811 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:59.811 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:59.811 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:59.811 Run-time dependency openssl found: YES 3.1.1
00:01:59.811 Run-time dependency libpcap found: YES 1.10.4
00:01:59.811 Has header "pcap.h" with dependency libpcap: YES
00:01:59.811 Compiler for C supports arguments -Wcast-qual: YES
00:01:59.811 Compiler for C supports arguments -Wdeprecated: YES
00:01:59.811 Compiler for C supports arguments -Wformat: YES
00:01:59.811 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:59.811 Compiler for C supports arguments -Wformat-security: NO
00:01:59.811 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:59.811 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:59.811 Compiler for C supports arguments -Wnested-externs: YES
00:01:59.811 Compiler for C supports arguments -Wold-style-definition: YES
00:01:59.811 Compiler for C supports arguments -Wpointer-arith: YES
00:01:59.811 Compiler for C supports arguments -Wsign-compare: YES
00:01:59.811 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:59.811 Compiler for C supports arguments -Wundef: YES
00:01:59.811 Compiler for C supports arguments -Wwrite-strings: YES
00:01:59.811 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:59.811 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:59.811 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:59.811 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:59.811 Program objdump found: YES (/usr/bin/objdump)
00:01:59.811 Compiler for C supports arguments -mavx512f: YES
00:01:59.811 Checking if "AVX512 checking" compiles: YES
00:01:59.811 Fetching value of define "__SSE4_2__" : 1
00:01:59.811 Fetching value of define "__AES__" : 1
00:01:59.811 Fetching value of define "__AVX__" : 1
00:01:59.811 Fetching value of define "__AVX2__" : (undefined)
00:01:59.811 Fetching value of define "__AVX512BW__" : (undefined)
00:01:59.811 Fetching value of define "__AVX512CD__" : (undefined)
00:01:59.811 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:59.811 Fetching value of define "__AVX512F__" : (undefined)
00:01:59.811 Fetching value of define "__AVX512VL__" : (undefined)
00:01:59.811 Fetching value of define "__PCLMUL__" : 1
00:01:59.811 Fetching value of define "__RDRND__" : 1
00:01:59.811 Fetching value of define "__RDSEED__" : (undefined)
00:01:59.811 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:59.811 Fetching value of define "__znver1__" : (undefined)
00:01:59.811 Fetching value of define "__znver2__" : (undefined)
00:01:59.811 Fetching value of define "__znver3__" : (undefined)
00:01:59.811 Fetching value of define "__znver4__" : (undefined)
00:01:59.811 Library asan found: YES
00:01:59.811 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:59.811 Message: lib/log: Defining dependency "log"
00:01:59.811 Message: lib/kvargs: Defining dependency "kvargs"
00:01:59.811 Message: lib/telemetry: Defining dependency "telemetry"
00:01:59.811 Library rt found: YES
00:01:59.811 Checking for function "getentropy" : NO
00:01:59.811 Message: lib/eal: Defining dependency "eal"
00:01:59.811 Message: lib/ring: Defining dependency "ring"
00:01:59.811 Message: lib/rcu: Defining dependency "rcu"
00:01:59.811 Message: lib/mempool: Defining dependency "mempool"
00:01:59.811 Message: lib/mbuf: Defining dependency "mbuf"
00:01:59.811 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:59.811 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:59.811 Compiler for C supports arguments -mpclmul: YES
00:01:59.811 Compiler for C supports arguments -maes: YES
00:01:59.811 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:59.811 Compiler for C supports arguments -mavx512bw: YES
00:01:59.811 Compiler for C supports arguments -mavx512dq: YES
00:01:59.811 Compiler for C supports arguments -mavx512vl: YES
00:01:59.811 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:59.811 Compiler for C supports arguments -mavx2: YES
00:01:59.811 Compiler for C supports arguments -mavx: YES
00:01:59.811 Message: lib/net: Defining dependency "net"
00:01:59.811 Message: lib/meter: Defining dependency "meter"
00:01:59.811 Message: lib/ethdev: Defining dependency "ethdev"
00:01:59.811 Message: lib/pci: Defining dependency "pci"
00:01:59.811 Message: lib/cmdline: Defining dependency "cmdline"
00:01:59.811 Message: lib/hash: Defining dependency "hash"
00:01:59.811 Message: lib/timer: Defining dependency "timer"
00:01:59.811 Message: lib/compressdev: Defining dependency "compressdev"
00:01:59.811 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:59.811 Message: lib/dmadev: Defining dependency "dmadev"
00:01:59.811 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:59.811 Message: lib/power: Defining dependency "power"
00:01:59.811 Message: lib/reorder: Defining dependency "reorder"
00:01:59.811 Message: lib/security: Defining dependency "security"
00:01:59.811 Has header "linux/userfaultfd.h" : YES
00:01:59.811 Has header "linux/vduse.h" : YES
00:01:59.811 Message: lib/vhost: Defining dependency "vhost"
00:01:59.811 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:59.811 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:59.811 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:59.811 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:59.811 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:59.811 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.811 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.811 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.811 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.811 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.811 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:59.811 Configuring doxy-api-html.conf using configuration 00:01:59.811 Configuring doxy-api-man.conf using configuration 00:01:59.811 Program mandb found: YES (/usr/bin/mandb) 00:01:59.811 Program sphinx-build found: NO 00:01:59.811 Configuring rte_build_config.h using configuration 00:01:59.811 Message: 00:01:59.811 ================= 00:01:59.811 Applications Enabled 00:01:59.811 ================= 00:01:59.811 00:01:59.811 apps: 00:01:59.811 00:01:59.811 00:01:59.811 Message: 00:01:59.811 ================= 00:01:59.811 Libraries Enabled 00:01:59.811 ================= 00:01:59.811 00:01:59.811 libs: 00:01:59.811 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.811 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.811 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.811 00:01:59.811 Message: 00:01:59.811 =============== 00:01:59.811 Drivers Enabled 00:01:59.811 =============== 00:01:59.811 00:01:59.811 common: 00:01:59.811 00:01:59.811 bus: 00:01:59.811 pci, vdev, 00:01:59.811 mempool: 00:01:59.811 ring, 00:01:59.811 dma: 00:01:59.811 00:01:59.811 net: 00:01:59.811 00:01:59.811 crypto: 00:01:59.811 00:01:59.811 compress: 00:01:59.811 00:01:59.811 vdpa: 00:01:59.811 00:01:59.811 00:01:59.811 Message: 00:01:59.811 ================= 00:01:59.811 Content Skipped 00:01:59.811 ================= 00:01:59.811 00:01:59.811 apps: 00:01:59.811 dumpcap: explicitly disabled via build config 00:01:59.811 graph: explicitly disabled via build 
config 00:01:59.811 pdump: explicitly disabled via build config 00:01:59.811 proc-info: explicitly disabled via build config 00:01:59.811 test-acl: explicitly disabled via build config 00:01:59.811 test-bbdev: explicitly disabled via build config 00:01:59.811 test-cmdline: explicitly disabled via build config 00:01:59.811 test-compress-perf: explicitly disabled via build config 00:01:59.811 test-crypto-perf: explicitly disabled via build config 00:01:59.811 test-dma-perf: explicitly disabled via build config 00:01:59.811 test-eventdev: explicitly disabled via build config 00:01:59.811 test-fib: explicitly disabled via build config 00:01:59.811 test-flow-perf: explicitly disabled via build config 00:01:59.811 test-gpudev: explicitly disabled via build config 00:01:59.811 test-mldev: explicitly disabled via build config 00:01:59.811 test-pipeline: explicitly disabled via build config 00:01:59.811 test-pmd: explicitly disabled via build config 00:01:59.811 test-regex: explicitly disabled via build config 00:01:59.812 test-sad: explicitly disabled via build config 00:01:59.812 test-security-perf: explicitly disabled via build config 00:01:59.812 00:01:59.812 libs: 00:01:59.812 argparse: explicitly disabled via build config 00:01:59.812 metrics: explicitly disabled via build config 00:01:59.812 acl: explicitly disabled via build config 00:01:59.812 bbdev: explicitly disabled via build config 00:01:59.812 bitratestats: explicitly disabled via build config 00:01:59.812 bpf: explicitly disabled via build config 00:01:59.812 cfgfile: explicitly disabled via build config 00:01:59.812 distributor: explicitly disabled via build config 00:01:59.812 efd: explicitly disabled via build config 00:01:59.812 eventdev: explicitly disabled via build config 00:01:59.812 dispatcher: explicitly disabled via build config 00:01:59.812 gpudev: explicitly disabled via build config 00:01:59.812 gro: explicitly disabled via build config 00:01:59.812 gso: explicitly disabled via build config 
00:01:59.812 ip_frag: explicitly disabled via build config 00:01:59.812 jobstats: explicitly disabled via build config 00:01:59.812 latencystats: explicitly disabled via build config 00:01:59.812 lpm: explicitly disabled via build config 00:01:59.812 member: explicitly disabled via build config 00:01:59.812 pcapng: explicitly disabled via build config 00:01:59.812 rawdev: explicitly disabled via build config 00:01:59.812 regexdev: explicitly disabled via build config 00:01:59.812 mldev: explicitly disabled via build config 00:01:59.812 rib: explicitly disabled via build config 00:01:59.812 sched: explicitly disabled via build config 00:01:59.812 stack: explicitly disabled via build config 00:01:59.812 ipsec: explicitly disabled via build config 00:01:59.812 pdcp: explicitly disabled via build config 00:01:59.812 fib: explicitly disabled via build config 00:01:59.812 port: explicitly disabled via build config 00:01:59.812 pdump: explicitly disabled via build config 00:01:59.812 table: explicitly disabled via build config 00:01:59.812 pipeline: explicitly disabled via build config 00:01:59.812 graph: explicitly disabled via build config 00:01:59.812 node: explicitly disabled via build config 00:01:59.812 00:01:59.812 drivers: 00:01:59.812 common/cpt: not in enabled drivers build config 00:01:59.812 common/dpaax: not in enabled drivers build config 00:01:59.812 common/iavf: not in enabled drivers build config 00:01:59.812 common/idpf: not in enabled drivers build config 00:01:59.812 common/ionic: not in enabled drivers build config 00:01:59.812 common/mvep: not in enabled drivers build config 00:01:59.812 common/octeontx: not in enabled drivers build config 00:01:59.812 bus/auxiliary: not in enabled drivers build config 00:01:59.812 bus/cdx: not in enabled drivers build config 00:01:59.812 bus/dpaa: not in enabled drivers build config 00:01:59.812 bus/fslmc: not in enabled drivers build config 00:01:59.812 bus/ifpga: not in enabled drivers build config 00:01:59.812 
bus/platform: not in enabled drivers build config 00:01:59.812 bus/uacce: not in enabled drivers build config 00:01:59.812 bus/vmbus: not in enabled drivers build config 00:01:59.812 common/cnxk: not in enabled drivers build config 00:01:59.812 common/mlx5: not in enabled drivers build config 00:01:59.812 common/nfp: not in enabled drivers build config 00:01:59.812 common/nitrox: not in enabled drivers build config 00:01:59.812 common/qat: not in enabled drivers build config 00:01:59.812 common/sfc_efx: not in enabled drivers build config 00:01:59.812 mempool/bucket: not in enabled drivers build config 00:01:59.812 mempool/cnxk: not in enabled drivers build config 00:01:59.812 mempool/dpaa: not in enabled drivers build config 00:01:59.812 mempool/dpaa2: not in enabled drivers build config 00:01:59.812 mempool/octeontx: not in enabled drivers build config 00:01:59.812 mempool/stack: not in enabled drivers build config 00:01:59.812 dma/cnxk: not in enabled drivers build config 00:01:59.812 dma/dpaa: not in enabled drivers build config 00:01:59.812 dma/dpaa2: not in enabled drivers build config 00:01:59.812 dma/hisilicon: not in enabled drivers build config 00:01:59.812 dma/idxd: not in enabled drivers build config 00:01:59.812 dma/ioat: not in enabled drivers build config 00:01:59.812 dma/skeleton: not in enabled drivers build config 00:01:59.812 net/af_packet: not in enabled drivers build config 00:01:59.812 net/af_xdp: not in enabled drivers build config 00:01:59.812 net/ark: not in enabled drivers build config 00:01:59.812 net/atlantic: not in enabled drivers build config 00:01:59.812 net/avp: not in enabled drivers build config 00:01:59.812 net/axgbe: not in enabled drivers build config 00:01:59.812 net/bnx2x: not in enabled drivers build config 00:01:59.812 net/bnxt: not in enabled drivers build config 00:01:59.812 net/bonding: not in enabled drivers build config 00:01:59.812 net/cnxk: not in enabled drivers build config 00:01:59.812 net/cpfl: not in enabled 
drivers build config 00:01:59.812 net/cxgbe: not in enabled drivers build config 00:01:59.812 net/dpaa: not in enabled drivers build config 00:01:59.812 net/dpaa2: not in enabled drivers build config 00:01:59.812 net/e1000: not in enabled drivers build config 00:01:59.812 net/ena: not in enabled drivers build config 00:01:59.812 net/enetc: not in enabled drivers build config 00:01:59.812 net/enetfec: not in enabled drivers build config 00:01:59.812 net/enic: not in enabled drivers build config 00:01:59.812 net/failsafe: not in enabled drivers build config 00:01:59.812 net/fm10k: not in enabled drivers build config 00:01:59.812 net/gve: not in enabled drivers build config 00:01:59.812 net/hinic: not in enabled drivers build config 00:01:59.812 net/hns3: not in enabled drivers build config 00:01:59.812 net/i40e: not in enabled drivers build config 00:01:59.812 net/iavf: not in enabled drivers build config 00:01:59.812 net/ice: not in enabled drivers build config 00:01:59.812 net/idpf: not in enabled drivers build config 00:01:59.812 net/igc: not in enabled drivers build config 00:01:59.812 net/ionic: not in enabled drivers build config 00:01:59.812 net/ipn3ke: not in enabled drivers build config 00:01:59.812 net/ixgbe: not in enabled drivers build config 00:01:59.812 net/mana: not in enabled drivers build config 00:01:59.812 net/memif: not in enabled drivers build config 00:01:59.812 net/mlx4: not in enabled drivers build config 00:01:59.812 net/mlx5: not in enabled drivers build config 00:01:59.812 net/mvneta: not in enabled drivers build config 00:01:59.812 net/mvpp2: not in enabled drivers build config 00:01:59.812 net/netvsc: not in enabled drivers build config 00:01:59.812 net/nfb: not in enabled drivers build config 00:01:59.812 net/nfp: not in enabled drivers build config 00:01:59.812 net/ngbe: not in enabled drivers build config 00:01:59.812 net/null: not in enabled drivers build config 00:01:59.812 net/octeontx: not in enabled drivers build config 
00:01:59.812 net/octeon_ep: not in enabled drivers build config 00:01:59.812 net/pcap: not in enabled drivers build config 00:01:59.812 net/pfe: not in enabled drivers build config 00:01:59.812 net/qede: not in enabled drivers build config 00:01:59.812 net/ring: not in enabled drivers build config 00:01:59.812 net/sfc: not in enabled drivers build config 00:01:59.812 net/softnic: not in enabled drivers build config 00:01:59.812 net/tap: not in enabled drivers build config 00:01:59.812 net/thunderx: not in enabled drivers build config 00:01:59.812 net/txgbe: not in enabled drivers build config 00:01:59.812 net/vdev_netvsc: not in enabled drivers build config 00:01:59.812 net/vhost: not in enabled drivers build config 00:01:59.812 net/virtio: not in enabled drivers build config 00:01:59.812 net/vmxnet3: not in enabled drivers build config 00:01:59.812 raw/*: missing internal dependency, "rawdev" 00:01:59.812 crypto/armv8: not in enabled drivers build config 00:01:59.812 crypto/bcmfs: not in enabled drivers build config 00:01:59.812 crypto/caam_jr: not in enabled drivers build config 00:01:59.812 crypto/ccp: not in enabled drivers build config 00:01:59.812 crypto/cnxk: not in enabled drivers build config 00:01:59.812 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.812 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.812 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.812 crypto/mlx5: not in enabled drivers build config 00:01:59.812 crypto/mvsam: not in enabled drivers build config 00:01:59.812 crypto/nitrox: not in enabled drivers build config 00:01:59.812 crypto/null: not in enabled drivers build config 00:01:59.812 crypto/octeontx: not in enabled drivers build config 00:01:59.812 crypto/openssl: not in enabled drivers build config 00:01:59.812 crypto/scheduler: not in enabled drivers build config 00:01:59.812 crypto/uadk: not in enabled drivers build config 00:01:59.812 crypto/virtio: not in enabled drivers build config 
00:01:59.812 compress/isal: not in enabled drivers build config 00:01:59.812 compress/mlx5: not in enabled drivers build config 00:01:59.812 compress/nitrox: not in enabled drivers build config 00:01:59.812 compress/octeontx: not in enabled drivers build config 00:01:59.812 compress/zlib: not in enabled drivers build config 00:01:59.812 regex/*: missing internal dependency, "regexdev" 00:01:59.812 ml/*: missing internal dependency, "mldev" 00:01:59.812 vdpa/ifc: not in enabled drivers build config 00:01:59.812 vdpa/mlx5: not in enabled drivers build config 00:01:59.812 vdpa/nfp: not in enabled drivers build config 00:01:59.812 vdpa/sfc: not in enabled drivers build config 00:01:59.812 event/*: missing internal dependency, "eventdev" 00:01:59.812 baseband/*: missing internal dependency, "bbdev" 00:01:59.812 gpu/*: missing internal dependency, "gpudev" 00:01:59.812 00:01:59.812 00:01:59.812 Build targets in project: 85 00:01:59.812 00:01:59.812 DPDK 24.03.0 00:01:59.812 00:01:59.812 User defined options 00:01:59.812 buildtype : debug 00:01:59.812 default_library : shared 00:01:59.812 libdir : lib 00:01:59.812 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:59.812 b_sanitize : address 00:01:59.812 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.812 c_link_args : 00:01:59.812 cpu_instruction_set: native 00:01:59.812 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:59.812 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:59.813 enable_docs : false 00:01:59.813 
enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.813 enable_kmods : false 00:01:59.813 max_lcores : 128 00:01:59.813 tests : false 00:01:59.813 00:01:59.813 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.813 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:59.813 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:59.813 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:59.813 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:59.813 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.813 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:59.813 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:59.813 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:59.813 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:59.813 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.813 [10/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:59.813 [11/268] Linking static target lib/librte_kvargs.a 00:01:59.813 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.072 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.072 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.072 [15/268] Linking static target lib/librte_log.a 00:02:00.072 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.647 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.647 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.647 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 
00:02:00.647 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.647 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.647 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.647 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.647 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.647 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.647 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.647 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.910 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.910 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:00.910 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.910 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.910 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.910 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.910 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.910 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.910 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.910 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.910 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.910 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.910 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.910 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 
00:02:00.910 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.910 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.910 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.910 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.910 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.910 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.910 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.910 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.910 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.910 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.910 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.910 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.910 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:00.910 [55/268] Linking static target lib/librte_telemetry.a 00:02:00.910 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.910 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:01.172 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:01.172 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.172 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.172 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:01.172 [62/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.172 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:01.172 [64/268] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:01.172 [65/268] Linking target lib/librte_log.so.24.1 00:02:01.172 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.432 [67/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.693 [68/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.693 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:01.693 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:01.693 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:01.693 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:01.693 [73/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:01.693 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.693 [75/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:01.693 [76/268] Linking static target lib/librte_pci.a 00:02:01.693 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:01.693 [78/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:01.956 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:01.956 [80/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.956 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:01.956 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:01.956 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.956 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:01.956 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.956 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:01.956 [87/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 
00:02:01.956 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.956 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:01.956 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:01.956 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:01.956 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:01.956 [93/268] Linking static target lib/librte_meter.a 00:02:01.956 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:01.956 [95/268] Linking static target lib/librte_ring.a 00:02:01.956 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:01.956 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:01.956 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.956 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:01.956 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:01.956 [101/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.956 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:01.956 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:01.956 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:01.956 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:01.956 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:02.219 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:02.219 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:02.219 [109/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.219 [110/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:02.219 [111/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:02.219 [112/268] Linking target lib/librte_telemetry.so.24.1 00:02:02.219 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:02.219 [114/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.219 [115/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:02.219 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:02.219 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:02.219 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:02.219 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:02.219 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:02.219 [121/268] Linking static target lib/librte_mempool.a 00:02:02.481 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:02.481 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:02.481 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:02.481 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:02.481 [126/268] Linking static target lib/librte_rcu.a 00:02:02.481 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:02.481 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:02.481 [129/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.481 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:02.481 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.481 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:02.743 [133/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.743 [134/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.743 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:02.743 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:02.743 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:02.743 [138/268] Linking static target lib/librte_cmdline.a 00:02:03.005 [139/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.005 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.005 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.005 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:03.005 [143/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.005 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:03.005 [145/268] Linking static target lib/librte_eal.a 00:02:03.005 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:03.005 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:03.005 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:03.005 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:03.005 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.005 [151/268] Linking static target lib/librte_timer.a 00:02:03.005 [152/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.266 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:03.266 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:03.266 [155/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:03.266 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:03.266 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:03.266 [158/268] Linking static target lib/librte_dmadev.a 00:02:03.266 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:03.526 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.526 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:03.526 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:03.526 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.526 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:03.526 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:03.526 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.526 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.785 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:03.785 [169/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.785 [170/268] Linking static target lib/librte_net.a 00:02:03.785 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:03.785 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:03.785 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:03.785 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:03.785 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.785 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.785 [177/268] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:03.785 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:03.785 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:04.044 [180/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:04.044 [181/268] Linking static target lib/librte_power.a 00:02:04.044 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:04.044 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:04.044 [184/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.044 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:04.044 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.044 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.304 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.304 [189/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.304 [190/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.304 [191/268] Linking static target drivers/librte_bus_vdev.a 00:02:04.304 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:04.304 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:04.304 [194/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.304 [195/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:04.304 [196/268] Linking static target lib/librte_hash.a 00:02:04.304 [197/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.304 [198/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.304 [199/268] Linking static target 
drivers/librte_bus_pci.a
00:02:04.304 [200/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:04.304 [201/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:04.304 [202/268] Linking static target lib/librte_compressdev.a
00:02:04.304 [203/268] Linking static target lib/librte_reorder.a
00:02:04.304 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:04.562 [205/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.562 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.562 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:04.562 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:04.562 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:04.562 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:04.562 [211/268] Linking static target drivers/librte_mempool_ring.a
00:02:04.562 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.820 [213/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.820 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.820 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.110 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:05.110 [217/268] Linking static target lib/librte_security.a
00:02:05.394 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.653 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:06.219 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:06.219 [221/268] Linking static target lib/librte_mbuf.a
00:02:06.478 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:06.478 [223/268] Linking static target lib/librte_cryptodev.a
00:02:06.736 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.672 [225/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.672 [226/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:07.672 [227/268] Linking static target lib/librte_ethdev.a
00:02:09.046 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.046 [229/268] Linking target lib/librte_eal.so.24.1
00:02:09.305 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:09.305 [231/268] Linking target lib/librte_pci.so.24.1
00:02:09.305 [232/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:09.305 [233/268] Linking target lib/librte_meter.so.24.1
00:02:09.305 [234/268] Linking target lib/librte_ring.so.24.1
00:02:09.305 [235/268] Linking target lib/librte_timer.so.24.1
00:02:09.305 [236/268] Linking target lib/librte_dmadev.so.24.1
00:02:09.564 [237/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:09.564 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:09.564 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:09.564 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:09.564 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:09.564 [242/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:09.564 [243/268] Linking target lib/librte_rcu.so.24.1
00:02:09.564 [244/268] Linking target lib/librte_mempool.so.24.1
00:02:09.564 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:09.564 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:09.564 [247/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:09.564 [248/268] Linking target lib/librte_mbuf.so.24.1
00:02:09.822 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:09.822 [250/268] Linking target lib/librte_reorder.so.24.1
00:02:09.822 [251/268] Linking target lib/librte_compressdev.so.24.1
00:02:09.822 [252/268] Linking target lib/librte_net.so.24.1
00:02:09.822 [253/268] Linking target lib/librte_cryptodev.so.24.1
00:02:10.080 [254/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:10.080 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:10.080 [256/268] Linking target lib/librte_cmdline.so.24.1
00:02:10.080 [257/268] Linking target lib/librte_security.so.24.1
00:02:10.080 [258/268] Linking target lib/librte_hash.so.24.1
00:02:10.080 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:10.338 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:12.240 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.240 [262/268] Linking target lib/librte_ethdev.so.24.1
00:02:12.240 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:12.240 [264/268] Linking target lib/librte_power.so.24.1
00:02:38.787 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:38.787 [266/268] Linking static target lib/librte_vhost.a
00:02:38.787 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.787 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:38.787 INFO: autodetecting backend as ninja
00:02:38.787 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48
00:02:38.787 CC lib/ut_mock/mock.o
00:02:38.787 CC lib/ut/ut.o
00:02:38.787 CC lib/log/log.o
00:02:38.787 CC lib/log/log_flags.o
00:02:38.787 CC lib/log/log_deprecated.o
00:02:38.787 LIB libspdk_ut.a
00:02:38.787 LIB libspdk_ut_mock.a
00:02:38.787 LIB libspdk_log.a
00:02:38.787 SO libspdk_ut_mock.so.6.0
00:02:38.787 SO libspdk_ut.so.2.0
00:02:38.787 SO libspdk_log.so.7.1
00:02:38.787 SYMLINK libspdk_ut_mock.so
00:02:38.787 SYMLINK libspdk_ut.so
00:02:38.787 SYMLINK libspdk_log.so
00:02:38.787 CXX lib/trace_parser/trace.o
00:02:38.787 CC lib/ioat/ioat.o
00:02:38.787 CC lib/util/base64.o
00:02:38.787 CC lib/dma/dma.o
00:02:38.787 CC lib/util/bit_array.o
00:02:38.787 CC lib/util/cpuset.o
00:02:38.787 CC lib/util/crc16.o
00:02:38.787 CC lib/util/crc32.o
00:02:38.787 CC lib/util/crc32c.o
00:02:38.787 CC lib/util/crc32_ieee.o
00:02:38.787 CC lib/util/crc64.o
00:02:38.787 CC lib/util/dif.o
00:02:38.787 CC lib/util/fd.o
00:02:38.787 CC lib/util/fd_group.o
00:02:38.787 CC lib/util/file.o
00:02:38.787 CC lib/util/hexlify.o
00:02:38.787 CC lib/util/iov.o
00:02:38.787 CC lib/util/math.o
00:02:38.787 CC lib/util/net.o
00:02:38.787 CC lib/util/pipe.o
00:02:38.787 CC lib/util/strerror_tls.o
00:02:38.787 CC lib/util/string.o
00:02:38.787 CC lib/util/uuid.o
00:02:38.787 CC lib/util/xor.o
00:02:38.787 CC lib/util/md5.o
00:02:38.787 CC lib/util/zipf.o
00:02:38.787 CC lib/vfio_user/host/vfio_user_pci.o
00:02:38.787 CC lib/vfio_user/host/vfio_user.o
00:02:38.787 LIB libspdk_dma.a
00:02:38.787 SO libspdk_dma.so.5.0
00:02:38.787 SYMLINK libspdk_dma.so
00:02:38.787 LIB libspdk_ioat.a
00:02:38.787 SO libspdk_ioat.so.7.0
00:02:38.787 SYMLINK libspdk_ioat.so
00:02:38.787 LIB libspdk_vfio_user.a
00:02:38.787 SO libspdk_vfio_user.so.5.0
00:02:38.787 SYMLINK libspdk_vfio_user.so
00:02:38.787 LIB libspdk_util.a
00:02:39.045 SO libspdk_util.so.10.1
00:02:39.045 SYMLINK libspdk_util.so
00:02:39.304 CC lib/rdma_utils/rdma_utils.o
00:02:39.304 CC lib/idxd/idxd.o
00:02:39.304 CC lib/json/json_parse.o
00:02:39.304 CC lib/conf/conf.o
00:02:39.304 CC lib/vmd/vmd.o
00:02:39.304 CC lib/env_dpdk/env.o
00:02:39.304 CC lib/json/json_util.o
00:02:39.304 CC lib/idxd/idxd_user.o
00:02:39.304 CC lib/env_dpdk/memory.o
00:02:39.304 CC lib/vmd/led.o
00:02:39.304 CC lib/json/json_write.o
00:02:39.304 CC lib/env_dpdk/pci.o
00:02:39.304 CC lib/idxd/idxd_kernel.o
00:02:39.304 CC lib/env_dpdk/init.o
00:02:39.304 CC lib/env_dpdk/threads.o
00:02:39.304 CC lib/env_dpdk/pci_ioat.o
00:02:39.304 CC lib/env_dpdk/pci_virtio.o
00:02:39.304 CC lib/env_dpdk/pci_vmd.o
00:02:39.304 CC lib/env_dpdk/pci_idxd.o
00:02:39.304 CC lib/env_dpdk/pci_event.o
00:02:39.304 CC lib/env_dpdk/sigbus_handler.o
00:02:39.304 CC lib/env_dpdk/pci_dpdk.o
00:02:39.304 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:39.304 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:39.304 LIB libspdk_trace_parser.a
00:02:39.304 SO libspdk_trace_parser.so.6.0
00:02:39.304 SYMLINK libspdk_trace_parser.so
00:02:39.562 LIB libspdk_conf.a
00:02:39.562 SO libspdk_conf.so.6.0
00:02:39.562 LIB libspdk_rdma_utils.a
00:02:39.562 SYMLINK libspdk_conf.so
00:02:39.562 LIB libspdk_json.a
00:02:39.562 SO libspdk_rdma_utils.so.1.0
00:02:39.562 SO libspdk_json.so.6.0
00:02:39.562 SYMLINK libspdk_rdma_utils.so
00:02:39.821 SYMLINK libspdk_json.so
00:02:39.821 CC lib/rdma_provider/common.o
00:02:39.821 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:39.821 CC lib/jsonrpc/jsonrpc_server.o
00:02:39.821 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:39.821 CC lib/jsonrpc/jsonrpc_client.o
00:02:39.821 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:40.080 LIB libspdk_rdma_provider.a
00:02:40.080 LIB libspdk_idxd.a
00:02:40.080 SO libspdk_rdma_provider.so.7.0
00:02:40.080 SO libspdk_idxd.so.12.1
00:02:40.080 LIB libspdk_vmd.a
00:02:40.080 SYMLINK libspdk_rdma_provider.so
00:02:40.080 SO libspdk_vmd.so.6.0
00:02:40.080 LIB libspdk_jsonrpc.a
00:02:40.080 SYMLINK libspdk_idxd.so
00:02:40.351 SO libspdk_jsonrpc.so.6.0
00:02:40.351 SYMLINK libspdk_vmd.so
00:02:40.351 SYMLINK libspdk_jsonrpc.so
00:02:40.351 CC lib/rpc/rpc.o
00:02:40.613 LIB libspdk_rpc.a
00:02:40.613 SO libspdk_rpc.so.6.0
00:02:40.871 SYMLINK libspdk_rpc.so
00:02:40.871 CC lib/notify/notify.o
00:02:40.871 CC lib/notify/notify_rpc.o
00:02:40.871 CC lib/keyring/keyring.o
00:02:40.871 CC lib/trace/trace.o
00:02:40.871 CC lib/keyring/keyring_rpc.o
00:02:40.871 CC lib/trace/trace_flags.o
00:02:40.871 CC lib/trace/trace_rpc.o
00:02:41.130 LIB libspdk_notify.a
00:02:41.130 SO libspdk_notify.so.6.0
00:02:41.130 SYMLINK libspdk_notify.so
00:02:41.130 LIB libspdk_keyring.a
00:02:41.130 SO libspdk_keyring.so.2.0
00:02:41.130 LIB libspdk_trace.a
00:02:41.388 SO libspdk_trace.so.11.0
00:02:41.388 SYMLINK libspdk_keyring.so
00:02:41.388 SYMLINK libspdk_trace.so
00:02:41.388 CC lib/thread/thread.o
00:02:41.388 CC lib/thread/iobuf.o
00:02:41.388 CC lib/sock/sock.o
00:02:41.388 CC lib/sock/sock_rpc.o
00:02:41.954 LIB libspdk_sock.a
00:02:41.954 SO libspdk_sock.so.10.0
00:02:41.954 SYMLINK libspdk_sock.so
00:02:42.213 LIB libspdk_env_dpdk.a
00:02:42.213 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:42.213 CC lib/nvme/nvme_ctrlr.o
00:02:42.213 CC lib/nvme/nvme_fabric.o
00:02:42.213 CC lib/nvme/nvme_ns_cmd.o
00:02:42.213 CC lib/nvme/nvme_ns.o
00:02:42.213 CC lib/nvme/nvme_pcie_common.o
00:02:42.213 CC lib/nvme/nvme_pcie.o
00:02:42.213 CC lib/nvme/nvme_qpair.o
00:02:42.213 CC lib/nvme/nvme.o
00:02:42.213 CC lib/nvme/nvme_quirks.o
00:02:42.213 CC lib/nvme/nvme_transport.o
00:02:42.213 CC lib/nvme/nvme_discovery.o
00:02:42.213 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:42.213 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:42.213 CC lib/nvme/nvme_tcp.o
00:02:42.213 CC lib/nvme/nvme_opal.o
00:02:42.213 CC lib/nvme/nvme_io_msg.o
00:02:42.213 CC lib/nvme/nvme_poll_group.o
00:02:42.213 CC lib/nvme/nvme_zns.o
00:02:42.213 CC lib/nvme/nvme_stubs.o
00:02:42.213 CC lib/nvme/nvme_auth.o
00:02:42.213 CC lib/nvme/nvme_cuse.o
00:02:42.213 CC lib/nvme/nvme_rdma.o
00:02:42.213 SO libspdk_env_dpdk.so.15.1
00:02:42.472 SYMLINK libspdk_env_dpdk.so
00:02:43.847 LIB libspdk_thread.a
00:02:43.847 SO libspdk_thread.so.11.0
00:02:43.847 SYMLINK libspdk_thread.so
00:02:43.847 CC lib/init/json_config.o
00:02:43.847 CC lib/blob/blobstore.o
00:02:43.847 CC lib/init/subsystem.o
00:02:43.847 CC lib/virtio/virtio.o
00:02:43.847 CC lib/blob/request.o
00:02:43.847 CC lib/virtio/virtio_vhost_user.o
00:02:43.847 CC lib/init/subsystem_rpc.o
00:02:43.847 CC lib/accel/accel.o
00:02:43.847 CC lib/fsdev/fsdev.o
00:02:43.847 CC lib/blob/zeroes.o
00:02:43.847 CC lib/virtio/virtio_vfio_user.o
00:02:43.847 CC lib/init/rpc.o
00:02:43.847 CC lib/accel/accel_rpc.o
00:02:43.847 CC lib/fsdev/fsdev_io.o
00:02:43.847 CC lib/accel/accel_sw.o
00:02:43.847 CC lib/fsdev/fsdev_rpc.o
00:02:43.847 CC lib/blob/blob_bs_dev.o
00:02:43.847 CC lib/virtio/virtio_pci.o
00:02:44.106 LIB libspdk_init.a
00:02:44.106 SO libspdk_init.so.6.0
00:02:44.106 SYMLINK libspdk_init.so
00:02:44.365 LIB libspdk_virtio.a
00:02:44.365 SO libspdk_virtio.so.7.0
00:02:44.365 SYMLINK libspdk_virtio.so
00:02:44.365 CC lib/event/app.o
00:02:44.365 CC lib/event/reactor.o
00:02:44.365 CC lib/event/log_rpc.o
00:02:44.365 CC lib/event/app_rpc.o
00:02:44.365 CC lib/event/scheduler_static.o
00:02:44.624 LIB libspdk_fsdev.a
00:02:44.624 SO libspdk_fsdev.so.2.0
00:02:44.882 SYMLINK libspdk_fsdev.so
00:02:44.882 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:44.882 LIB libspdk_event.a
00:02:44.882 SO libspdk_event.so.14.0
00:02:45.141 SYMLINK libspdk_event.so
00:02:45.399 LIB libspdk_nvme.a
00:02:45.399 LIB libspdk_accel.a
00:02:45.399 SO libspdk_accel.so.16.0
00:02:45.399 SYMLINK libspdk_accel.so
00:02:45.399 SO libspdk_nvme.so.15.0
00:02:45.657 CC lib/bdev/bdev.o
00:02:45.657 CC lib/bdev/bdev_rpc.o
00:02:45.657 CC lib/bdev/bdev_zone.o
00:02:45.657 CC lib/bdev/part.o
00:02:45.657 CC lib/bdev/scsi_nvme.o
00:02:45.657 SYMLINK libspdk_nvme.so
00:02:45.657 LIB libspdk_fuse_dispatcher.a
00:02:45.657 SO libspdk_fuse_dispatcher.so.1.0
00:02:45.916 SYMLINK libspdk_fuse_dispatcher.so
00:02:48.449 LIB libspdk_blob.a
00:02:48.449 SO libspdk_blob.so.11.0
00:02:48.449 SYMLINK libspdk_blob.so
00:02:48.449 CC lib/lvol/lvol.o
00:02:48.449 CC lib/blobfs/blobfs.o
00:02:48.449 CC lib/blobfs/tree.o
00:02:49.016 LIB libspdk_bdev.a
00:02:49.016 SO libspdk_bdev.so.17.0
00:02:49.281 SYMLINK libspdk_bdev.so
00:02:49.281 CC lib/nbd/nbd.o
00:02:49.281 CC lib/ftl/ftl_core.o
00:02:49.281 CC lib/scsi/dev.o
00:02:49.281 CC lib/ftl/ftl_init.o
00:02:49.281 CC lib/nbd/nbd_rpc.o
00:02:49.281 CC lib/ublk/ublk.o
00:02:49.281 CC lib/scsi/lun.o
00:02:49.281 CC lib/nvmf/ctrlr.o
00:02:49.281 CC lib/ftl/ftl_layout.o
00:02:49.281 CC lib/scsi/port.o
00:02:49.281 CC lib/nvmf/ctrlr_discovery.o
00:02:49.281 CC lib/ftl/ftl_debug.o
00:02:49.281 CC lib/ublk/ublk_rpc.o
00:02:49.281 CC lib/ftl/ftl_io.o
00:02:49.281 CC lib/scsi/scsi.o
00:02:49.281 CC lib/nvmf/ctrlr_bdev.o
00:02:49.281 CC lib/scsi/scsi_bdev.o
00:02:49.281 CC lib/nvmf/subsystem.o
00:02:49.281 CC lib/ftl/ftl_sb.o
00:02:49.281 CC lib/ftl/ftl_l2p.o
00:02:49.281 CC lib/scsi/scsi_pr.o
00:02:49.281 CC lib/nvmf/nvmf.o
00:02:49.281 CC lib/scsi/scsi_rpc.o
00:02:49.281 CC lib/ftl/ftl_l2p_flat.o
00:02:49.281 CC lib/nvmf/nvmf_rpc.o
00:02:49.281 CC lib/nvmf/transport.o
00:02:49.281 CC lib/scsi/task.o
00:02:49.281 CC lib/ftl/ftl_nv_cache.o
00:02:49.281 CC lib/nvmf/tcp.o
00:02:49.281 CC lib/ftl/ftl_band.o
00:02:49.281 CC lib/nvmf/stubs.o
00:02:49.281 CC lib/ftl/ftl_band_ops.o
00:02:49.281 CC lib/ftl/ftl_writer.o
00:02:49.281 CC lib/nvmf/mdns_server.o
00:02:49.281 CC lib/ftl/ftl_rq.o
00:02:49.281 CC lib/nvmf/rdma.o
00:02:49.281 CC lib/ftl/ftl_reloc.o
00:02:49.281 CC lib/nvmf/auth.o
00:02:49.281 CC lib/ftl/ftl_l2p_cache.o
00:02:49.281 CC lib/ftl/ftl_p2l.o
00:02:49.281 CC lib/ftl/ftl_p2l_log.o
00:02:49.281 CC lib/ftl/mngt/ftl_mngt.o
00:02:49.281 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:49.281 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:49.281 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:49.281 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:49.542 LIB libspdk_blobfs.a
00:02:49.542 SO libspdk_blobfs.so.10.0
00:02:49.542 SYMLINK libspdk_blobfs.so
00:02:49.804 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:49.804 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:49.804 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:49.804 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:49.804 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:49.804 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:49.804 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:49.804 LIB libspdk_lvol.a
00:02:49.804 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:49.804 CC lib/ftl/utils/ftl_conf.o
00:02:49.804 CC lib/ftl/utils/ftl_md.o
00:02:49.804 CC lib/ftl/utils/ftl_mempool.o
00:02:49.804 SO libspdk_lvol.so.10.0
00:02:49.805 CC lib/ftl/utils/ftl_bitmap.o
00:02:49.805 CC lib/ftl/utils/ftl_property.o
00:02:49.805 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:49.805 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:49.805 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:49.805 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:50.066 SYMLINK libspdk_lvol.so
00:02:50.066 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:50.066 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:50.066 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:50.066 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:50.066 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:50.066 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:50.066 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:50.066 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:50.066 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:50.066 CC lib/ftl/base/ftl_base_dev.o
00:02:50.066 CC lib/ftl/base/ftl_base_bdev.o
00:02:50.328 CC lib/ftl/ftl_trace.o
00:02:50.328 LIB libspdk_nbd.a
00:02:50.328 SO libspdk_nbd.so.7.0
00:02:50.328 LIB libspdk_scsi.a
00:02:50.328 SYMLINK libspdk_nbd.so
00:02:50.586 SO libspdk_scsi.so.9.0
00:02:50.586 SYMLINK libspdk_scsi.so
00:02:50.586 LIB libspdk_ublk.a
00:02:50.586 SO libspdk_ublk.so.3.0
00:02:50.845 CC lib/vhost/vhost.o
00:02:50.845 CC lib/iscsi/conn.o
00:02:50.845 CC lib/vhost/vhost_rpc.o
00:02:50.845 CC lib/iscsi/init_grp.o
00:02:50.845 CC lib/iscsi/iscsi.o
00:02:50.845 CC lib/vhost/vhost_scsi.o
00:02:50.845 CC lib/iscsi/param.o
00:02:50.845 CC lib/vhost/vhost_blk.o
00:02:50.845 CC lib/iscsi/portal_grp.o
00:02:50.845 CC lib/vhost/rte_vhost_user.o
00:02:50.845 CC lib/iscsi/tgt_node.o
00:02:50.845 CC lib/iscsi/iscsi_subsystem.o
00:02:50.845 CC lib/iscsi/iscsi_rpc.o
00:02:50.845 CC lib/iscsi/task.o
00:02:50.845 SYMLINK libspdk_ublk.so
00:02:51.104 LIB libspdk_ftl.a
00:02:51.362 SO libspdk_ftl.so.9.0
00:02:51.621 SYMLINK libspdk_ftl.so
00:02:52.188 LIB libspdk_vhost.a
00:02:52.188 SO libspdk_vhost.so.8.0
00:02:52.446 SYMLINK libspdk_vhost.so
00:02:52.704 LIB libspdk_iscsi.a
00:02:52.704 SO libspdk_iscsi.so.8.0
00:02:52.962 LIB libspdk_nvmf.a
00:02:52.962 SYMLINK libspdk_iscsi.so
00:02:52.962 SO libspdk_nvmf.so.20.0
00:02:53.221 SYMLINK libspdk_nvmf.so
00:02:53.480 CC module/env_dpdk/env_dpdk_rpc.o
00:02:53.480 CC module/accel/ioat/accel_ioat.o
00:02:53.480 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:53.480 CC module/blob/bdev/blob_bdev.o
00:02:53.480 CC module/keyring/file/keyring.o
00:02:53.480 CC module/accel/iaa/accel_iaa.o
00:02:53.480 CC module/scheduler/gscheduler/gscheduler.o
00:02:53.480 CC module/accel/dsa/accel_dsa.o
00:02:53.480 CC module/keyring/file/keyring_rpc.o
00:02:53.480 CC module/accel/iaa/accel_iaa_rpc.o
00:02:53.480 CC module/accel/dsa/accel_dsa_rpc.o
00:02:53.480 CC module/fsdev/aio/fsdev_aio.o
00:02:53.480 CC module/keyring/linux/keyring.o
00:02:53.480 CC module/accel/ioat/accel_ioat_rpc.o
00:02:53.480 CC module/accel/error/accel_error.o
00:02:53.480 CC module/sock/posix/posix.o
00:02:53.480 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:53.480 CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:53.480 CC module/accel/error/accel_error_rpc.o
00:02:53.480 CC module/keyring/linux/keyring_rpc.o
00:02:53.480 CC module/fsdev/aio/linux_aio_mgr.o
00:02:53.480 LIB libspdk_env_dpdk_rpc.a
00:02:53.480 SO libspdk_env_dpdk_rpc.so.6.0
00:02:53.738 SYMLINK libspdk_env_dpdk_rpc.so
00:02:53.738 LIB libspdk_keyring_linux.a
00:02:53.738 LIB libspdk_keyring_file.a
00:02:53.738 LIB libspdk_scheduler_gscheduler.a
00:02:53.738 LIB libspdk_scheduler_dpdk_governor.a
00:02:53.738 SO libspdk_keyring_linux.so.1.0
00:02:53.738 SO libspdk_keyring_file.so.2.0
00:02:53.738 SO libspdk_scheduler_gscheduler.so.4.0
00:02:53.738 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:53.738 LIB libspdk_accel_ioat.a
00:02:53.738 LIB libspdk_scheduler_dynamic.a
00:02:53.738 SYMLINK libspdk_keyring_linux.so
00:02:53.738 LIB libspdk_accel_iaa.a
00:02:53.738 SYMLINK libspdk_keyring_file.so
00:02:53.738 SO libspdk_accel_ioat.so.6.0
00:02:53.738 LIB libspdk_accel_error.a
00:02:53.738 SYMLINK libspdk_scheduler_gscheduler.so
00:02:53.738 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:53.738 SO libspdk_scheduler_dynamic.so.4.0
00:02:53.738 SO libspdk_accel_iaa.so.3.0
00:02:53.738 SO libspdk_accel_error.so.2.0
00:02:53.738 SYMLINK libspdk_accel_ioat.so
00:02:53.738 SYMLINK libspdk_scheduler_dynamic.so
00:02:53.738 SYMLINK libspdk_accel_iaa.so
00:02:53.738 SYMLINK libspdk_accel_error.so
00:02:53.997 LIB libspdk_blob_bdev.a
00:02:53.997 LIB libspdk_accel_dsa.a
00:02:53.997 SO libspdk_blob_bdev.so.11.0
00:02:53.997 SO libspdk_accel_dsa.so.5.0
00:02:53.997 SYMLINK libspdk_blob_bdev.so
00:02:53.997 SYMLINK libspdk_accel_dsa.so
00:02:54.260 CC module/bdev/gpt/gpt.o
00:02:54.260 CC module/bdev/malloc/bdev_malloc.o
00:02:54.260 CC module/bdev/delay/vbdev_delay.o
00:02:54.260 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:54.260 CC module/bdev/error/vbdev_error.o
00:02:54.260 CC module/bdev/gpt/vbdev_gpt.o
00:02:54.260 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:54.260 CC module/bdev/error/vbdev_error_rpc.o
00:02:54.260 CC module/bdev/lvol/vbdev_lvol.o
00:02:54.260 CC module/bdev/split/vbdev_split.o
00:02:54.260 CC module/bdev/split/vbdev_split_rpc.o
00:02:54.260 CC module/blobfs/bdev/blobfs_bdev.o
00:02:54.260 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:54.260 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:54.260 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:54.260 CC module/bdev/raid/bdev_raid.o
00:02:54.260 CC module/bdev/passthru/vbdev_passthru.o
00:02:54.260 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:54.260 CC module/bdev/nvme/bdev_nvme.o
00:02:54.260 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:54.260 CC module/bdev/null/bdev_null_rpc.o
00:02:54.260 CC module/bdev/raid/bdev_raid_rpc.o
00:02:54.260 CC module/bdev/null/bdev_null.o
00:02:54.260 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:54.260 CC module/bdev/nvme/nvme_rpc.o
00:02:54.260 CC module/bdev/iscsi/bdev_iscsi.o
00:02:54.260 CC module/bdev/raid/bdev_raid_sb.o
00:02:54.260 CC module/bdev/raid/raid0.o
00:02:54.260 CC module/bdev/nvme/bdev_mdns_client.o
00:02:54.260 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:54.260 CC module/bdev/raid/raid1.o
00:02:54.260 CC module/bdev/nvme/vbdev_opal.o
00:02:54.260 CC module/bdev/raid/concat.o
00:02:54.260 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:54.260 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:54.260 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:54.260 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:54.260 CC module/bdev/aio/bdev_aio.o
00:02:54.260 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:54.260 CC module/bdev/aio/bdev_aio_rpc.o
00:02:54.260 CC module/bdev/ftl/bdev_ftl.o
00:02:54.260 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:54.519 LIB libspdk_blobfs_bdev.a
00:02:54.519 LIB libspdk_bdev_split.a
00:02:54.519 SO libspdk_blobfs_bdev.so.6.0
00:02:54.519 SO libspdk_bdev_split.so.6.0
00:02:54.519 LIB libspdk_bdev_null.a
00:02:54.519 LIB libspdk_fsdev_aio.a
00:02:54.778 SO libspdk_bdev_null.so.6.0
00:02:54.778 SO libspdk_fsdev_aio.so.1.0
00:02:54.778 SYMLINK libspdk_bdev_split.so
00:02:54.778 SYMLINK libspdk_blobfs_bdev.so
00:02:54.778 LIB libspdk_bdev_error.a
00:02:54.778 LIB libspdk_bdev_gpt.a
00:02:54.778 SYMLINK libspdk_bdev_null.so
00:02:54.778 SYMLINK libspdk_fsdev_aio.so
00:02:54.778 SO libspdk_bdev_error.so.6.0
00:02:54.778 SO libspdk_bdev_gpt.so.6.0
00:02:54.778 LIB libspdk_bdev_passthru.a
00:02:54.778 LIB libspdk_sock_posix.a
00:02:54.778 SO libspdk_bdev_passthru.so.6.0
00:02:54.778 SO libspdk_sock_posix.so.6.0
00:02:54.778 SYMLINK libspdk_bdev_error.so
00:02:54.778 SYMLINK libspdk_bdev_gpt.so
00:02:54.778 LIB libspdk_bdev_ftl.a
00:02:54.778 LIB libspdk_bdev_zone_block.a
00:02:54.778 LIB libspdk_bdev_aio.a
00:02:54.778 LIB libspdk_bdev_iscsi.a
00:02:54.778 SO libspdk_bdev_ftl.so.6.0
00:02:54.778 SYMLINK libspdk_bdev_passthru.so
00:02:54.778 SO libspdk_bdev_zone_block.so.6.0
00:02:54.778 SO libspdk_bdev_aio.so.6.0
00:02:54.778 SO libspdk_bdev_iscsi.so.6.0
00:02:54.778 SYMLINK libspdk_sock_posix.so
00:02:54.778 LIB libspdk_bdev_malloc.a
00:02:55.036 SO libspdk_bdev_malloc.so.6.0
00:02:55.036 SYMLINK libspdk_bdev_ftl.so
00:02:55.036 SYMLINK libspdk_bdev_zone_block.so
00:02:55.036 SYMLINK libspdk_bdev_aio.so
00:02:55.036 SYMLINK libspdk_bdev_iscsi.so
00:02:55.036 LIB libspdk_bdev_delay.a
00:02:55.036 SO libspdk_bdev_delay.so.6.0
00:02:55.036 SYMLINK libspdk_bdev_malloc.so
00:02:55.036 SYMLINK libspdk_bdev_delay.so
00:02:55.036 LIB libspdk_bdev_lvol.a
00:02:55.036 LIB libspdk_bdev_virtio.a
00:02:55.036 SO libspdk_bdev_lvol.so.6.0
00:02:55.295 SO libspdk_bdev_virtio.so.6.0
00:02:55.295 SYMLINK libspdk_bdev_lvol.so
00:02:55.295 SYMLINK libspdk_bdev_virtio.so
00:02:55.863 LIB libspdk_bdev_raid.a
00:02:55.863 SO libspdk_bdev_raid.so.6.0
00:02:55.863 SYMLINK libspdk_bdev_raid.so
00:02:57.770 LIB libspdk_bdev_nvme.a
00:02:57.770 SO libspdk_bdev_nvme.so.7.1
00:02:58.048 SYMLINK libspdk_bdev_nvme.so
00:02:58.329 CC module/event/subsystems/iobuf/iobuf.o
00:02:58.329 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:58.329 CC module/event/subsystems/sock/sock.o
00:02:58.329 CC module/event/subsystems/vmd/vmd.o
00:02:58.329 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:58.329 CC module/event/subsystems/fsdev/fsdev.o
00:02:58.329 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:58.329 CC module/event/subsystems/scheduler/scheduler.o
00:02:58.329 CC module/event/subsystems/keyring/keyring.o
00:02:58.588 LIB libspdk_event_keyring.a
00:02:58.588 LIB libspdk_event_fsdev.a
00:02:58.588 LIB libspdk_event_vhost_blk.a
00:02:58.588 LIB libspdk_event_sock.a
00:02:58.588 LIB libspdk_event_scheduler.a
00:02:58.588 LIB libspdk_event_vmd.a
00:02:58.588 SO libspdk_event_keyring.so.1.0
00:02:58.588 SO libspdk_event_fsdev.so.1.0
00:02:58.588 LIB libspdk_event_iobuf.a
00:02:58.588 SO libspdk_event_vhost_blk.so.3.0
00:02:58.588 SO libspdk_event_scheduler.so.4.0
00:02:58.588 SO libspdk_event_sock.so.5.0
00:02:58.588 SO libspdk_event_vmd.so.6.0
00:02:58.588 SO libspdk_event_iobuf.so.3.0
00:02:58.588 SYMLINK libspdk_event_keyring.so
00:02:58.588 SYMLINK libspdk_event_fsdev.so
00:02:58.588 SYMLINK libspdk_event_vhost_blk.so
00:02:58.588 SYMLINK libspdk_event_sock.so
00:02:58.588 SYMLINK libspdk_event_scheduler.so
00:02:58.588 SYMLINK libspdk_event_vmd.so
00:02:58.588 SYMLINK libspdk_event_iobuf.so
00:02:58.847 CC module/event/subsystems/accel/accel.o
00:02:58.847 LIB libspdk_event_accel.a
00:02:58.847 SO libspdk_event_accel.so.6.0
00:02:58.847 SYMLINK libspdk_event_accel.so
00:02:59.106 CC module/event/subsystems/bdev/bdev.o
00:02:59.365 LIB libspdk_event_bdev.a
00:02:59.365 SO libspdk_event_bdev.so.6.0
00:02:59.365 SYMLINK libspdk_event_bdev.so
00:02:59.623 CC module/event/subsystems/nbd/nbd.o
00:02:59.623 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:59.623 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:59.623 CC module/event/subsystems/scsi/scsi.o
00:02:59.623 CC module/event/subsystems/ublk/ublk.o
00:02:59.623 LIB libspdk_event_ublk.a
00:02:59.623 LIB libspdk_event_nbd.a
00:02:59.623 SO libspdk_event_ublk.so.3.0
00:02:59.623 SO libspdk_event_nbd.so.6.0
00:02:59.623 LIB libspdk_event_scsi.a
00:02:59.623 SO libspdk_event_scsi.so.6.0
00:02:59.882 SYMLINK libspdk_event_ublk.so
00:02:59.882 SYMLINK libspdk_event_nbd.so
00:02:59.882 SYMLINK libspdk_event_scsi.so
00:02:59.882 LIB libspdk_event_nvmf.a
00:02:59.882 SO libspdk_event_nvmf.so.6.0
00:02:59.882 SYMLINK libspdk_event_nvmf.so
00:02:59.882 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:59.882 CC module/event/subsystems/iscsi/iscsi.o
00:03:00.141 LIB libspdk_event_vhost_scsi.a
00:03:00.141 SO libspdk_event_vhost_scsi.so.3.0
00:03:00.141 LIB libspdk_event_iscsi.a
00:03:00.141 SO libspdk_event_iscsi.so.6.0
00:03:00.141 SYMLINK libspdk_event_vhost_scsi.so
00:03:00.141 SYMLINK libspdk_event_iscsi.so
00:03:00.401 SO libspdk.so.6.0
00:03:00.401 SYMLINK libspdk.so
00:03:00.401 CC app/spdk_nvme_identify/identify.o
00:03:00.401 CC test/rpc_client/rpc_client_test.o
00:03:00.401 CC app/spdk_top/spdk_top.o
00:03:00.401 CC app/trace_record/trace_record.o
00:03:00.401 CC app/spdk_lspci/spdk_lspci.o
00:03:00.401 TEST_HEADER include/spdk/accel.h
00:03:00.401 TEST_HEADER include/spdk/assert.h
00:03:00.401 TEST_HEADER include/spdk/accel_module.h
00:03:00.401 TEST_HEADER include/spdk/barrier.h
00:03:00.401 TEST_HEADER include/spdk/base64.h
00:03:00.401 TEST_HEADER include/spdk/bdev.h
00:03:00.401 CC app/spdk_nvme_perf/perf.o
00:03:00.401 CXX app/trace/trace.o
00:03:00.401 TEST_HEADER include/spdk/bdev_module.h
00:03:00.401 TEST_HEADER include/spdk/bdev_zone.h
00:03:00.401 TEST_HEADER include/spdk/bit_array.h
00:03:00.401 CC app/spdk_nvme_discover/discovery_aer.o
00:03:00.401 TEST_HEADER include/spdk/bit_pool.h
00:03:00.401 TEST_HEADER include/spdk/blob_bdev.h
00:03:00.401 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:00.401 TEST_HEADER include/spdk/blobfs.h
00:03:00.663 TEST_HEADER include/spdk/blob.h
00:03:00.663 TEST_HEADER include/spdk/conf.h
00:03:00.663 TEST_HEADER include/spdk/config.h
00:03:00.663 TEST_HEADER include/spdk/cpuset.h
00:03:00.663 TEST_HEADER include/spdk/crc16.h
00:03:00.663 TEST_HEADER include/spdk/crc32.h
00:03:00.663 TEST_HEADER include/spdk/crc64.h
00:03:00.663 TEST_HEADER include/spdk/dif.h
00:03:00.663 TEST_HEADER include/spdk/dma.h
00:03:00.663 TEST_HEADER include/spdk/endian.h
00:03:00.663 TEST_HEADER include/spdk/env_dpdk.h
00:03:00.663 TEST_HEADER include/spdk/env.h
00:03:00.663 TEST_HEADER include/spdk/event.h
00:03:00.663 TEST_HEADER include/spdk/fd.h
00:03:00.663 TEST_HEADER include/spdk/fd_group.h
00:03:00.663 TEST_HEADER include/spdk/fsdev.h
00:03:00.663 TEST_HEADER include/spdk/file.h
00:03:00.663 TEST_HEADER include/spdk/fsdev_module.h
00:03:00.663 TEST_HEADER include/spdk/ftl.h
00:03:00.663 TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:00.663 TEST_HEADER include/spdk/gpt_spec.h
00:03:00.663 TEST_HEADER include/spdk/hexlify.h
00:03:00.663 TEST_HEADER include/spdk/histogram_data.h
00:03:00.663 TEST_HEADER include/spdk/idxd.h
00:03:00.663 TEST_HEADER include/spdk/idxd_spec.h
00:03:00.663 TEST_HEADER include/spdk/init.h
00:03:00.663 TEST_HEADER include/spdk/ioat.h
00:03:00.663 TEST_HEADER include/spdk/ioat_spec.h
00:03:00.663 TEST_HEADER include/spdk/iscsi_spec.h
00:03:00.663 TEST_HEADER include/spdk/json.h
00:03:00.663 TEST_HEADER include/spdk/jsonrpc.h
00:03:00.663 TEST_HEADER include/spdk/keyring.h
00:03:00.663 TEST_HEADER include/spdk/keyring_module.h
00:03:00.663 TEST_HEADER include/spdk/likely.h
00:03:00.663 TEST_HEADER include/spdk/log.h
00:03:00.663 TEST_HEADER include/spdk/lvol.h
00:03:00.663 TEST_HEADER include/spdk/md5.h
00:03:00.663 TEST_HEADER include/spdk/memory.h
00:03:00.663 TEST_HEADER include/spdk/mmio.h
00:03:00.663 TEST_HEADER include/spdk/nbd.h
00:03:00.663 TEST_HEADER include/spdk/net.h
00:03:00.663 TEST_HEADER include/spdk/notify.h
00:03:00.663 TEST_HEADER include/spdk/nvme_intel.h
00:03:00.663 TEST_HEADER include/spdk/nvme.h
00:03:00.663 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:00.663 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:00.663 TEST_HEADER include/spdk/nvme_spec.h
00:03:00.663 TEST_HEADER include/spdk/nvme_zns.h
00:03:00.663 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:00.663 TEST_HEADER include/spdk/nvmf.h
00:03:00.663 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:00.663 TEST_HEADER include/spdk/nvmf_spec.h
00:03:00.663 TEST_HEADER include/spdk/opal.h
00:03:00.663 TEST_HEADER include/spdk/nvmf_transport.h
00:03:00.663 TEST_HEADER include/spdk/opal_spec.h
00:03:00.663 TEST_HEADER include/spdk/pci_ids.h
00:03:00.663 TEST_HEADER include/spdk/queue.h
00:03:00.663 TEST_HEADER include/spdk/pipe.h
00:03:00.663 TEST_HEADER include/spdk/reduce.h
00:03:00.663 TEST_HEADER include/spdk/rpc.h
00:03:00.663 TEST_HEADER include/spdk/scheduler.h
00:03:00.663 TEST_HEADER include/spdk/scsi.h
00:03:00.663 TEST_HEADER include/spdk/scsi_spec.h
00:03:00.663 TEST_HEADER include/spdk/sock.h
00:03:00.663 TEST_HEADER include/spdk/string.h
00:03:00.663 TEST_HEADER include/spdk/stdinc.h
00:03:00.663 TEST_HEADER include/spdk/thread.h
00:03:00.663 TEST_HEADER include/spdk/trace.h
00:03:00.663 TEST_HEADER include/spdk/trace_parser.h
00:03:00.663 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:00.663 TEST_HEADER include/spdk/tree.h
00:03:00.663 TEST_HEADER include/spdk/ublk.h
00:03:00.663 CC app/spdk_dd/spdk_dd.o
00:03:00.663 TEST_HEADER include/spdk/util.h
00:03:00.663 TEST_HEADER include/spdk/uuid.h
00:03:00.663 TEST_HEADER include/spdk/version.h
00:03:00.663 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:00.663 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:00.663 TEST_HEADER include/spdk/vhost.h
00:03:00.663 TEST_HEADER include/spdk/vmd.h
00:03:00.663 TEST_HEADER include/spdk/xor.h
00:03:00.663 TEST_HEADER include/spdk/zipf.h
00:03:00.663 CXX test/cpp_headers/accel.o
00:03:00.663 CXX test/cpp_headers/accel_module.o
00:03:00.663 CXX test/cpp_headers/assert.o
00:03:00.663 CXX test/cpp_headers/barrier.o
00:03:00.663 CXX test/cpp_headers/base64.o
00:03:00.663 CXX test/cpp_headers/bdev.o
00:03:00.663 CXX test/cpp_headers/bdev_module.o
00:03:00.663 CXX test/cpp_headers/bdev_zone.o
00:03:00.663 CXX test/cpp_headers/bit_array.o
00:03:00.663 CXX test/cpp_headers/bit_pool.o
00:03:00.663 CXX test/cpp_headers/blob_bdev.o
00:03:00.663 CXX test/cpp_headers/blobfs_bdev.o
00:03:00.663 CXX test/cpp_headers/blobfs.o
00:03:00.663 CXX test/cpp_headers/blob.o
00:03:00.663 CXX test/cpp_headers/conf.o
00:03:00.663 CXX test/cpp_headers/config.o
00:03:00.663 CXX test/cpp_headers/cpuset.o
00:03:00.663 CXX test/cpp_headers/crc16.o
00:03:00.663 CC app/nvmf_tgt/nvmf_main.o
00:03:00.663 CC app/iscsi_tgt/iscsi_tgt.o
00:03:00.663 CC app/spdk_tgt/spdk_tgt.o
00:03:00.663 CXX test/cpp_headers/crc32.o
00:03:00.663 CC test/app/jsoncat/jsoncat.o
00:03:00.663 CC examples/ioat/perf/perf.o
00:03:00.663 CC test/app/histogram_perf/histogram_perf.o
00:03:00.663 CC test/thread/poller_perf/poller_perf.o
00:03:00.663 CC test/app/stub/stub.o
00:03:00.663 CC examples/ioat/verify/verify.o
00:03:00.663 CC examples/util/zipf/zipf.o
00:03:00.663 CC app/fio/nvme/fio_plugin.o
00:03:00.663 CC test/env/memory/memory_ut.o
00:03:00.663 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:00.663 CC test/env/vtophys/vtophys.o
00:03:00.663 CC test/env/pci/pci_ut.o
00:03:00.663 CC test/dma/test_dma/test_dma.o
00:03:00.663 CC app/fio/bdev/fio_plugin.o
00:03:00.663 CC test/app/bdev_svc/bdev_svc.o
00:03:00.928 CC test/env/mem_callbacks/mem_callbacks.o
00:03:00.928 LINK spdk_lspci
00:03:00.928 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:00.928 LINK rpc_client_test
00:03:00.928 LINK spdk_nvme_discover
00:03:00.928 LINK jsoncat
00:03:00.928 LINK histogram_perf
00:03:00.928 LINK poller_perf
00:03:00.928 LINK nvmf_tgt
00:03:00.928 LINK interrupt_tgt
00:03:00.928 LINK vtophys
00:03:00.929 CXX test/cpp_headers/crc64.o
00:03:01.195 LINK env_dpdk_post_init
00:03:01.195 CXX test/cpp_headers/dif.o
00:03:01.195 LINK zipf
00:03:01.195 CXX test/cpp_headers/dma.o
00:03:01.195 CXX test/cpp_headers/endian.o
00:03:01.195 CXX test/cpp_headers/env_dpdk.o
00:03:01.195 CXX test/cpp_headers/env.o
00:03:01.195 CXX test/cpp_headers/event.o
00:03:01.195 CXX test/cpp_headers/fd_group.o
00:03:01.195 CXX test/cpp_headers/fd.o
00:03:01.195 CXX test/cpp_headers/file.o
00:03:01.195 LINK iscsi_tgt
00:03:01.195 LINK stub
00:03:01.195 CXX test/cpp_headers/fsdev.o
00:03:01.195 CXX test/cpp_headers/fsdev_module.o
00:03:01.195 CXX test/cpp_headers/ftl.o
00:03:01.195 LINK spdk_tgt
00:03:01.195 CXX test/cpp_headers/fuse_dispatcher.o
00:03:01.195 LINK spdk_trace_record
00:03:01.195 CXX test/cpp_headers/gpt_spec.o
00:03:01.195 LINK bdev_svc
00:03:01.195 CXX test/cpp_headers/hexlify.o
00:03:01.195 CXX test/cpp_headers/histogram_data.o
00:03:01.195 LINK ioat_perf
00:03:01.195 LINK verify
00:03:01.195 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:01.195 CXX test/cpp_headers/idxd.o
00:03:01.195 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:01.195 CXX test/cpp_headers/idxd_spec.o
00:03:01.463 CXX test/cpp_headers/init.o
00:03:01.463 CXX test/cpp_headers/ioat.o
00:03:01.463 CXX test/cpp_headers/ioat_spec.o
00:03:01.463 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:01.463 CXX test/cpp_headers/iscsi_spec.o
00:03:01.463 LINK spdk_dd
00:03:01.463 CXX test/cpp_headers/json.o
00:03:01.463 CXX test/cpp_headers/jsonrpc.o
00:03:01.463 LINK spdk_trace
00:03:01.463 CXX test/cpp_headers/keyring.o
00:03:01.463 CXX test/cpp_headers/keyring_module.o
00:03:01.463 CXX test/cpp_headers/likely.o
00:03:01.463 CXX test/cpp_headers/log.o
00:03:01.463 CXX test/cpp_headers/lvol.o
00:03:01.463 CXX test/cpp_headers/md5.o
00:03:01.463 CXX test/cpp_headers/memory.o
00:03:01.463 CXX test/cpp_headers/mmio.o
00:03:01.463 CXX test/cpp_headers/nbd.o
00:03:01.463 CXX test/cpp_headers/net.o
00:03:01.463 CXX test/cpp_headers/notify.o
00:03:01.463 CXX test/cpp_headers/nvme.o
00:03:01.728 CXX test/cpp_headers/nvme_intel.o
00:03:01.728 CXX test/cpp_headers/nvme_ocssd.o
00:03:01.728 CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:01.728 CXX test/cpp_headers/nvme_spec.o
00:03:01.728 CXX test/cpp_headers/nvme_zns.o
00:03:01.728 CC test/event/event_perf/event_perf.o
00:03:01.728 CC test/event/reactor_perf/reactor_perf.o
00:03:01.728 CXX test/cpp_headers/nvmf_cmd.o
00:03:01.728 CC test/event/reactor/reactor.o
00:03:01.728 CXX test/cpp_headers/nvmf_fc_spec.o
00:03:01.728 LINK pci_ut
00:03:01.728 CC test/event/app_repeat/app_repeat.o
00:03:01.728 CXX test/cpp_headers/nvmf.o
00:03:01.728 CXX test/cpp_headers/nvmf_spec.o
00:03:01.728 CXX test/cpp_headers/nvmf_transport.o
00:03:01.728 CC examples/vmd/lsvmd/lsvmd.o
00:03:01.728 CC examples/sock/hello_world/hello_sock.o
00:03:01.728 CC examples/vmd/led/led.o
00:03:01.728 CC test/event/scheduler/scheduler.o
00:03:01.728 CC examples/idxd/perf/perf.o
00:03:01.728 CXX test/cpp_headers/opal.o
00:03:01.728 CC examples/thread/thread/thread_ex.o
00:03:01.728 CXX test/cpp_headers/opal_spec.o
00:03:01.728 CXX test/cpp_headers/pci_ids.o
00:03:01.994 CXX test/cpp_headers/pipe.o
00:03:01.994 CXX test/cpp_headers/queue.o
00:03:01.994 LINK nvme_fuzz
00:03:01.994 LINK test_dma
00:03:01.994 CXX test/cpp_headers/reduce.o
00:03:01.994 CXX test/cpp_headers/rpc.o
00:03:01.994 CXX test/cpp_headers/scheduler.o
00:03:01.994 CXX test/cpp_headers/scsi.o
00:03:01.994 CXX test/cpp_headers/scsi_spec.o
00:03:01.994 CXX test/cpp_headers/sock.o
00:03:01.994 CXX test/cpp_headers/stdinc.o
00:03:01.994 CXX test/cpp_headers/string.o
00:03:01.994 LINK spdk_bdev
00:03:01.994 LINK reactor
00:03:01.994 LINK reactor_perf
00:03:01.994 CXX test/cpp_headers/thread.o
00:03:01.994 LINK event_perf
00:03:01.994 CXX test/cpp_headers/trace.o
00:03:01.994 CXX test/cpp_headers/trace_parser.o
00:03:01.994 CXX test/cpp_headers/tree.o
00:03:01.994 LINK app_repeat
00:03:01.994 LINK spdk_nvme
00:03:01.994 CXX test/cpp_headers/ublk.o
00:03:01.994 LINK lsvmd
00:03:01.994 LINK mem_callbacks
00:03:01.994 CC app/vhost/vhost.o
00:03:01.994 CXX test/cpp_headers/util.o
00:03:01.994 LINK led
00:03:01.994 CXX
test/cpp_headers/uuid.o 00:03:01.994 CXX test/cpp_headers/version.o 00:03:01.994 CXX test/cpp_headers/vfio_user_pci.o 00:03:02.254 CXX test/cpp_headers/vfio_user_spec.o 00:03:02.254 CXX test/cpp_headers/vhost.o 00:03:02.254 CXX test/cpp_headers/vmd.o 00:03:02.254 CXX test/cpp_headers/xor.o 00:03:02.254 CXX test/cpp_headers/zipf.o 00:03:02.254 LINK scheduler 00:03:02.254 LINK hello_sock 00:03:02.254 LINK thread 00:03:02.513 LINK vhost_fuzz 00:03:02.513 LINK vhost 00:03:02.513 LINK idxd_perf 00:03:02.513 CC test/nvme/aer/aer.o 00:03:02.513 CC test/nvme/e2edp/nvme_dp.o 00:03:02.513 CC test/nvme/simple_copy/simple_copy.o 00:03:02.513 CC test/nvme/reset/reset.o 00:03:02.513 CC test/nvme/reserve/reserve.o 00:03:02.513 CC test/nvme/err_injection/err_injection.o 00:03:02.513 CC test/nvme/overhead/overhead.o 00:03:02.513 CC test/nvme/cuse/cuse.o 00:03:02.513 CC test/nvme/fdp/fdp.o 00:03:02.513 CC test/nvme/connect_stress/connect_stress.o 00:03:02.513 CC test/nvme/startup/startup.o 00:03:02.513 CC test/nvme/boot_partition/boot_partition.o 00:03:02.513 CC test/nvme/sgl/sgl.o 00:03:02.513 CC test/nvme/compliance/nvme_compliance.o 00:03:02.513 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:02.513 CC test/nvme/fused_ordering/fused_ordering.o 00:03:02.513 CC test/blobfs/mkfs/mkfs.o 00:03:02.513 LINK spdk_nvme_identify 00:03:02.513 CC test/accel/dif/dif.o 00:03:02.513 LINK spdk_nvme_perf 00:03:02.513 LINK spdk_top 00:03:02.513 CC test/lvol/esnap/esnap.o 00:03:02.772 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:02.772 CC examples/nvme/hotplug/hotplug.o 00:03:02.772 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:02.772 CC examples/nvme/reconnect/reconnect.o 00:03:02.772 CC examples/nvme/abort/abort.o 00:03:02.772 CC examples/nvme/hello_world/hello_world.o 00:03:02.772 CC examples/nvme/arbitration/arbitration.o 00:03:02.772 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:02.772 LINK boot_partition 00:03:02.772 LINK connect_stress 00:03:02.772 LINK fused_ordering 
00:03:02.772 CC examples/accel/perf/accel_perf.o 00:03:02.772 LINK simple_copy 00:03:02.772 LINK startup 00:03:02.772 CC examples/blob/cli/blobcli.o 00:03:03.031 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:03.031 LINK mkfs 00:03:03.031 LINK doorbell_aers 00:03:03.031 CC examples/blob/hello_world/hello_blob.o 00:03:03.031 LINK reserve 00:03:03.031 LINK err_injection 00:03:03.031 LINK nvme_dp 00:03:03.031 LINK overhead 00:03:03.031 LINK reset 00:03:03.031 LINK pmr_persistence 00:03:03.031 LINK sgl 00:03:03.031 LINK cmb_copy 00:03:03.031 LINK aer 00:03:03.031 LINK hello_world 00:03:03.290 LINK hotplug 00:03:03.290 LINK nvme_compliance 00:03:03.290 LINK arbitration 00:03:03.290 LINK memory_ut 00:03:03.290 LINK reconnect 00:03:03.290 LINK fdp 00:03:03.290 LINK hello_blob 00:03:03.290 LINK hello_fsdev 00:03:03.290 LINK abort 00:03:03.549 LINK blobcli 00:03:03.549 LINK accel_perf 00:03:03.549 LINK nvme_manage 00:03:03.807 LINK dif 00:03:04.065 CC examples/bdev/hello_world/hello_bdev.o 00:03:04.065 CC examples/bdev/bdevperf/bdevperf.o 00:03:04.065 CC test/bdev/bdevio/bdevio.o 00:03:04.324 LINK hello_bdev 00:03:04.324 LINK iscsi_fuzz 00:03:04.583 LINK bdevio 00:03:04.583 LINK cuse 00:03:04.841 LINK bdevperf 00:03:05.409 CC examples/nvmf/nvmf/nvmf.o 00:03:05.667 LINK nvmf 00:03:10.931 LINK esnap 00:03:10.931 00:03:10.931 real 1m21.224s 00:03:10.931 user 13m9.569s 00:03:10.931 sys 2m35.923s 00:03:10.931 07:27:02 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:10.931 07:27:02 make -- common/autotest_common.sh@10 -- $ set +x 00:03:10.931 ************************************ 00:03:10.931 END TEST make 00:03:10.931 ************************************ 00:03:10.931 07:27:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.931 07:27:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.931 07:27:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.931 07:27:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:03:10.931 07:27:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.931 07:27:02 -- pm/common@44 -- $ pid=2738524 00:03:10.931 07:27:02 -- pm/common@50 -- $ kill -TERM 2738524 00:03:10.931 07:27:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.931 07:27:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.931 07:27:02 -- pm/common@44 -- $ pid=2738526 00:03:10.931 07:27:02 -- pm/common@50 -- $ kill -TERM 2738526 00:03:10.931 07:27:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.931 07:27:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:10.931 07:27:02 -- pm/common@44 -- $ pid=2738528 00:03:10.931 07:27:02 -- pm/common@50 -- $ kill -TERM 2738528 00:03:10.931 07:27:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.931 07:27:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:10.931 07:27:02 -- pm/common@44 -- $ pid=2738558 00:03:10.931 07:27:02 -- pm/common@50 -- $ sudo -E kill -TERM 2738558 00:03:10.931 07:27:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:10.931 07:27:02 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:10.931 07:27:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:10.931 07:27:02 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:10.931 07:27:02 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:10.931 07:27:02 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:10.931 07:27:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.931 07:27:02 -- scripts/common.sh@333 -- 
# local ver1 ver1_l 00:03:10.931 07:27:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.931 07:27:02 -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.931 07:27:02 -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.931 07:27:02 -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.931 07:27:02 -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.931 07:27:02 -- scripts/common.sh@338 -- # local 'op=<' 00:03:10.931 07:27:02 -- scripts/common.sh@340 -- # ver1_l=2 00:03:10.931 07:27:02 -- scripts/common.sh@341 -- # ver2_l=1 00:03:10.931 07:27:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:10.931 07:27:02 -- scripts/common.sh@344 -- # case "$op" in 00:03:10.931 07:27:02 -- scripts/common.sh@345 -- # : 1 00:03:10.931 07:27:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:10.931 07:27:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:10.931 07:27:02 -- scripts/common.sh@365 -- # decimal 1 00:03:10.931 07:27:02 -- scripts/common.sh@353 -- # local d=1 00:03:10.931 07:27:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.931 07:27:02 -- scripts/common.sh@355 -- # echo 1 00:03:10.931 07:27:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:10.931 07:27:02 -- scripts/common.sh@366 -- # decimal 2 00:03:10.931 07:27:02 -- scripts/common.sh@353 -- # local d=2 00:03:10.931 07:27:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.931 07:27:02 -- scripts/common.sh@355 -- # echo 2 00:03:10.931 07:27:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:10.931 07:27:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:10.931 07:27:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:10.931 07:27:02 -- scripts/common.sh@368 -- # return 0 00:03:10.931 07:27:02 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.931 07:27:02 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:10.932 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:03:10.932 --rc genhtml_branch_coverage=1 00:03:10.932 --rc genhtml_function_coverage=1 00:03:10.932 --rc genhtml_legend=1 00:03:10.932 --rc geninfo_all_blocks=1 00:03:10.932 --rc geninfo_unexecuted_blocks=1 00:03:10.932 00:03:10.932 ' 00:03:10.932 07:27:02 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:10.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.932 --rc genhtml_branch_coverage=1 00:03:10.932 --rc genhtml_function_coverage=1 00:03:10.932 --rc genhtml_legend=1 00:03:10.932 --rc geninfo_all_blocks=1 00:03:10.932 --rc geninfo_unexecuted_blocks=1 00:03:10.932 00:03:10.932 ' 00:03:10.932 07:27:02 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:10.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.932 --rc genhtml_branch_coverage=1 00:03:10.932 --rc genhtml_function_coverage=1 00:03:10.932 --rc genhtml_legend=1 00:03:10.932 --rc geninfo_all_blocks=1 00:03:10.932 --rc geninfo_unexecuted_blocks=1 00:03:10.932 00:03:10.932 ' 00:03:10.932 07:27:02 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:10.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.932 --rc genhtml_branch_coverage=1 00:03:10.932 --rc genhtml_function_coverage=1 00:03:10.932 --rc genhtml_legend=1 00:03:10.932 --rc geninfo_all_blocks=1 00:03:10.932 --rc geninfo_unexecuted_blocks=1 00:03:10.932 00:03:10.932 ' 00:03:10.932 07:27:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:10.932 07:27:02 -- nvmf/common.sh@7 -- # uname -s 00:03:10.932 07:27:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.932 07:27:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.932 07:27:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.932 07:27:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.932 07:27:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.932 07:27:02 -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:03:10.932 07:27:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.932 07:27:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.932 07:27:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.932 07:27:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:10.932 07:27:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:10.932 07:27:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:10.932 07:27:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.932 07:27:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:10.932 07:27:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:10.932 07:27:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:10.932 07:27:02 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:10.932 07:27:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:10.932 07:27:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.932 07:27:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.932 07:27:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.932 07:27:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.932 07:27:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.932 07:27:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.932 07:27:02 -- paths/export.sh@5 -- # export PATH 00:03:10.932 07:27:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.932 07:27:02 -- nvmf/common.sh@51 -- # : 0 00:03:10.932 07:27:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:10.932 07:27:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:10.932 07:27:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:10.932 07:27:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.932 07:27:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.932 07:27:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:10.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:10.932 07:27:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:10.932 07:27:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:10.932 07:27:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:10.932 07:27:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.932 07:27:02 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.932 07:27:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.932 07:27:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.932 07:27:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.932 07:27:02 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.932 07:27:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.932 07:27:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.932 07:27:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.932 07:27:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.932 07:27:02 -- spdk/autotest.sh@48 -- # udevadm_pid=2798980 00:03:10.932 07:27:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.932 07:27:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:10.932 07:27:02 -- pm/common@17 -- # local monitor 00:03:10.932 07:27:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.932 07:27:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.932 07:27:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.932 07:27:02 -- pm/common@21 -- # date +%s 00:03:10.932 07:27:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.932 07:27:02 -- pm/common@21 -- # date +%s 00:03:10.932 07:27:02 -- pm/common@25 -- # sleep 1 00:03:10.932 07:27:02 -- pm/common@21 -- # date +%s 00:03:10.932 07:27:02 -- pm/common@21 -- # date +%s 00:03:10.932 07:27:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731997622 00:03:10.932 07:27:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731997622 00:03:10.932 07:27:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731997622 
00:03:10.932 07:27:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731997622 00:03:10.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731997622_collect-vmstat.pm.log 00:03:10.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731997622_collect-cpu-load.pm.log 00:03:10.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731997622_collect-cpu-temp.pm.log 00:03:11.190 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731997622_collect-bmc-pm.bmc.pm.log 00:03:12.123 07:27:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.123 07:27:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.123 07:27:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.123 07:27:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.123 07:27:03 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.123 07:27:03 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:12.123 07:27:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.123 07:27:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:12.123 07:27:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.123 07:27:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.123 07:27:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:12.123 07:27:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.124 07:27:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:03:12.124 07:27:03 -- common/autotest_common.sh@1457 -- # uname 00:03:12.124 07:27:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:12.124 07:27:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.124 07:27:03 -- common/autotest_common.sh@1477 -- # uname 00:03:12.124 07:27:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:12.124 07:27:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:12.124 07:27:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.124 lcov: LCOV version 1.15 00:03:12.124 07:27:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:44.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:44.191 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:49.453 07:27:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:49.453 07:27:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.453 07:27:40 -- common/autotest_common.sh@10 -- # set +x 00:03:49.453 07:27:40 -- spdk/autotest.sh@78 -- # rm -f 00:03:49.453 07:27:40 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.022 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:50.022 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:50.022 0000:00:04.6 (8086 0e26): Already 
using the ioatdma driver 00:03:50.022 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:50.022 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:50.022 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:50.022 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:50.022 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:50.022 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:50.022 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:50.022 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:50.022 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:50.022 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:50.281 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:50.281 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:50.281 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:50.281 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:50.281 07:27:42 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:50.281 07:27:42 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:50.281 07:27:42 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:50.281 07:27:42 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:50.281 07:27:42 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:50.281 07:27:42 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:50.281 07:27:42 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:50.281 07:27:42 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.281 07:27:42 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:50.281 07:27:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:50.281 07:27:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.281 07:27:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:50.281 07:27:42 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:50.281 07:27:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:50.281 07:27:42 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:50.281 No valid GPT data, bailing 00:03:50.281 07:27:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.281 07:27:42 -- scripts/common.sh@394 -- # pt= 00:03:50.281 07:27:42 -- scripts/common.sh@395 -- # return 1 00:03:50.281 07:27:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:50.281 1+0 records in 00:03:50.281 1+0 records out 00:03:50.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00227661 s, 461 MB/s 00:03:50.281 07:27:42 -- spdk/autotest.sh@105 -- # sync 00:03:50.281 07:27:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:50.281 07:27:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:50.281 07:27:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:52.813 07:27:44 -- spdk/autotest.sh@111 -- # uname -s 00:03:52.813 07:27:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:52.813 07:27:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:52.813 07:27:44 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:53.379 Hugepages 00:03:53.379 node hugesize free / total 00:03:53.379 node0 1048576kB 0 / 0 00:03:53.379 node0 2048kB 0 / 0 00:03:53.379 node1 1048576kB 0 / 0 00:03:53.379 node1 2048kB 0 / 0 00:03:53.379 00:03:53.380 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.380 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:53.380 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:53.380 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:53.380 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:53.380 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:53.380 I/OAT 0000:00:04.5 8086 0e25 0 
ioatdma - - 00:03:53.380 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:53.380 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:53.380 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:53.380 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:53.380 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:53.380 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:53.380 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:53.380 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:53.380 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:53.380 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:53.638 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:53.638 07:27:45 -- spdk/autotest.sh@117 -- # uname -s 00:03:53.638 07:27:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:53.638 07:27:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:53.638 07:27:45 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.573 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:54.573 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:54.573 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:54.573 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:54.573 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:54.832 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:54.832 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:54.832 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:54.832 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:54.832 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:54.832 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:54.832 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:54.832 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:54.832 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:54.832 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:54.832 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:55.771 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.771 07:27:47 -- 
common/autotest_common.sh@1517 -- # sleep 1 00:03:56.708 07:27:48 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:56.708 07:27:48 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:56.708 07:27:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:56.708 07:27:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:56.708 07:27:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:56.708 07:27:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:56.708 07:27:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.708 07:27:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.708 07:27:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:56.966 07:27:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:56.966 07:27:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:56.966 07:27:48 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.903 Waiting for block devices as requested 00:03:58.161 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:58.161 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:58.421 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:58.421 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:58.421 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:58.421 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:58.680 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:58.680 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:58.680 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:58.680 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:58.939 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:58.939 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:58.939 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:58.939 0000:80:04.3 (8086 0e23): 
vfio-pci -> ioatdma 00:03:59.198 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:59.198 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:59.198 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:59.458 07:27:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:59.458 07:27:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:59.458 07:27:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:59.458 07:27:51 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:59.458 07:27:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:59.458 07:27:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:59.458 07:27:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:59.458 07:27:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:59.458 07:27:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:59.458 07:27:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:59.458 07:27:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:59.458 07:27:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:59.458 07:27:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:59.458 07:27:51 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:59.458 07:27:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:59.458 07:27:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:59.458 07:27:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:59.458 07:27:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:59.458 07:27:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:59.458 07:27:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:59.458 07:27:51 -- 
common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:59.458 07:27:51 -- common/autotest_common.sh@1543 -- # continue 00:03:59.458 07:27:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:59.458 07:27:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:59.458 07:27:51 -- common/autotest_common.sh@10 -- # set +x 00:03:59.458 07:27:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:59.458 07:27:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.458 07:27:51 -- common/autotest_common.sh@10 -- # set +x 00:03:59.458 07:27:51 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.394 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.394 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.394 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.394 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.670 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.670 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.670 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.670 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:00.670 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:00.670 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:00.670 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:00.670 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:00.670 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:00.670 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:00.670 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:00.670 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:01.700 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.700 07:27:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:01.700 07:27:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.700 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.701 07:27:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:01.701 07:27:53 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:01.701 07:27:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.701 07:27:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:01.701 07:27:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:01.701 07:27:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:01.701 07:27:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:01.701 07:27:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:01.701 07:27:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.701 07:27:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.701 07:27:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.701 07:27:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:01.701 07:27:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:01.701 07:27:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:01.701 07:27:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:01.701 07:27:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:01.701 07:27:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:01.701 07:27:53 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:01.701 07:27:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:01.701 07:27:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:01.701 07:27:53 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:01.701 07:27:53 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:04:01.701 07:27:53 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:04:01.701 07:27:53 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2809866 00:04:01.701 07:27:53 -- common/autotest_common.sh@1583 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.701 07:27:53 -- common/autotest_common.sh@1585 -- # waitforlisten 2809866 00:04:01.701 07:27:53 -- common/autotest_common.sh@835 -- # '[' -z 2809866 ']' 00:04:01.701 07:27:53 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.701 07:27:53 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.701 07:27:53 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.701 07:27:53 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.701 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.959 [2024-11-19 07:27:53.678059] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:01.959 [2024-11-19 07:27:53.678200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809866 ] 00:04:01.959 [2024-11-19 07:27:53.809504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.217 [2024-11-19 07:27:53.949209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.151 07:27:54 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.151 07:27:54 -- common/autotest_common.sh@868 -- # return 0 00:04:03.151 07:27:54 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:03.151 07:27:54 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:03.151 07:27:54 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:06.438 nvme0n1 00:04:06.438 07:27:58 -- common/autotest_common.sh@1591 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:06.438 [2024-11-19 07:27:58.307756] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:06.438 [2024-11-19 07:27:58.307829] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:06.438 request: 00:04:06.438 { 00:04:06.438 "nvme_ctrlr_name": "nvme0", 00:04:06.438 "password": "test", 00:04:06.438 "method": "bdev_nvme_opal_revert", 00:04:06.438 "req_id": 1 00:04:06.438 } 00:04:06.438 Got JSON-RPC error response 00:04:06.438 response: 00:04:06.438 { 00:04:06.438 "code": -32603, 00:04:06.438 "message": "Internal error" 00:04:06.438 } 00:04:06.438 07:27:58 -- common/autotest_common.sh@1591 -- # true 00:04:06.438 07:27:58 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:06.438 07:27:58 -- common/autotest_common.sh@1595 -- # killprocess 2809866 00:04:06.438 07:27:58 -- common/autotest_common.sh@954 -- # '[' -z 2809866 ']' 00:04:06.438 07:27:58 -- common/autotest_common.sh@958 -- # kill -0 2809866 00:04:06.438 07:27:58 -- common/autotest_common.sh@959 -- # uname 00:04:06.438 07:27:58 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.438 07:27:58 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809866 00:04:06.438 07:27:58 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.438 07:27:58 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.438 07:27:58 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809866' 00:04:06.438 killing process with pid 2809866 00:04:06.438 07:27:58 -- common/autotest_common.sh@973 -- # kill 2809866 00:04:06.438 07:27:58 -- common/autotest_common.sh@978 -- # wait 2809866 00:04:10.622 07:28:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:10.622 07:28:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:10.622 07:28:02 -- spdk/autotest.sh@142 -- # 
[[ 0 -eq 1 ]] 00:04:10.622 07:28:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:10.622 07:28:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:10.622 07:28:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.622 07:28:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.622 07:28:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:10.622 07:28:02 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:10.622 07:28:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.622 07:28:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.622 07:28:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.622 ************************************ 00:04:10.622 START TEST env 00:04:10.622 ************************************ 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:10.622 * Looking for test storage... 
00:04:10.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:10.622 07:28:02 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.622 07:28:02 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.622 07:28:02 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.622 07:28:02 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.622 07:28:02 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.622 07:28:02 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.622 07:28:02 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.622 07:28:02 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.622 07:28:02 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.622 07:28:02 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.622 07:28:02 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.622 07:28:02 env -- scripts/common.sh@344 -- # case "$op" in 00:04:10.622 07:28:02 env -- scripts/common.sh@345 -- # : 1 00:04:10.622 07:28:02 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.622 07:28:02 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.622 07:28:02 env -- scripts/common.sh@365 -- # decimal 1 00:04:10.622 07:28:02 env -- scripts/common.sh@353 -- # local d=1 00:04:10.622 07:28:02 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.622 07:28:02 env -- scripts/common.sh@355 -- # echo 1 00:04:10.622 07:28:02 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.622 07:28:02 env -- scripts/common.sh@366 -- # decimal 2 00:04:10.622 07:28:02 env -- scripts/common.sh@353 -- # local d=2 00:04:10.622 07:28:02 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.622 07:28:02 env -- scripts/common.sh@355 -- # echo 2 00:04:10.622 07:28:02 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.622 07:28:02 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.622 07:28:02 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.622 07:28:02 env -- scripts/common.sh@368 -- # return 0 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:10.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.622 --rc genhtml_branch_coverage=1 00:04:10.622 --rc genhtml_function_coverage=1 00:04:10.622 --rc genhtml_legend=1 00:04:10.622 --rc geninfo_all_blocks=1 00:04:10.622 --rc geninfo_unexecuted_blocks=1 00:04:10.622 00:04:10.622 ' 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:10.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.622 --rc genhtml_branch_coverage=1 00:04:10.622 --rc genhtml_function_coverage=1 00:04:10.622 --rc genhtml_legend=1 00:04:10.622 --rc geninfo_all_blocks=1 00:04:10.622 --rc geninfo_unexecuted_blocks=1 00:04:10.622 00:04:10.622 ' 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:10.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:10.622 --rc genhtml_branch_coverage=1 00:04:10.622 --rc genhtml_function_coverage=1 00:04:10.622 --rc genhtml_legend=1 00:04:10.622 --rc geninfo_all_blocks=1 00:04:10.622 --rc geninfo_unexecuted_blocks=1 00:04:10.622 00:04:10.622 ' 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:10.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.622 --rc genhtml_branch_coverage=1 00:04:10.622 --rc genhtml_function_coverage=1 00:04:10.622 --rc genhtml_legend=1 00:04:10.622 --rc geninfo_all_blocks=1 00:04:10.622 --rc geninfo_unexecuted_blocks=1 00:04:10.622 00:04:10.622 ' 00:04:10.622 07:28:02 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.622 07:28:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.622 07:28:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.622 ************************************ 00:04:10.622 START TEST env_memory 00:04:10.622 ************************************ 00:04:10.622 07:28:02 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:10.622 00:04:10.622 00:04:10.622 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.622 http://cunit.sourceforge.net/ 00:04:10.622 00:04:10.622 00:04:10.622 Suite: memory 00:04:10.622 Test: alloc and free memory map ...[2024-11-19 07:28:02.315329] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:10.622 passed 00:04:10.622 Test: mem map translation ...[2024-11-19 07:28:02.361833] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:10.622 [2024-11-19 
07:28:02.361882] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:10.622 [2024-11-19 07:28:02.361969] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:10.622 [2024-11-19 07:28:02.362020] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:10.622 passed 00:04:10.622 Test: mem map registration ...[2024-11-19 07:28:02.429084] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:10.622 [2024-11-19 07:28:02.429124] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:10.622 passed 00:04:10.622 Test: mem map adjacent registrations ...passed 00:04:10.622 00:04:10.622 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.622 suites 1 1 n/a 0 0 00:04:10.622 tests 4 4 4 0 0 00:04:10.622 asserts 152 152 152 0 n/a 00:04:10.622 00:04:10.622 Elapsed time = 0.237 seconds 00:04:10.622 00:04:10.622 real 0m0.258s 00:04:10.622 user 0m0.244s 00:04:10.622 sys 0m0.013s 00:04:10.622 07:28:02 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.622 07:28:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:10.622 ************************************ 00:04:10.622 END TEST env_memory 00:04:10.623 ************************************ 00:04:10.623 07:28:02 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:10.623 07:28:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:10.623 07:28:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.623 07:28:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.881 ************************************ 00:04:10.881 START TEST env_vtophys 00:04:10.881 ************************************ 00:04:10.881 07:28:02 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:10.881 EAL: lib.eal log level changed from notice to debug 00:04:10.881 EAL: Detected lcore 0 as core 0 on socket 0 00:04:10.881 EAL: Detected lcore 1 as core 1 on socket 0 00:04:10.881 EAL: Detected lcore 2 as core 2 on socket 0 00:04:10.881 EAL: Detected lcore 3 as core 3 on socket 0 00:04:10.882 EAL: Detected lcore 4 as core 4 on socket 0 00:04:10.882 EAL: Detected lcore 5 as core 5 on socket 0 00:04:10.882 EAL: Detected lcore 6 as core 8 on socket 0 00:04:10.882 EAL: Detected lcore 7 as core 9 on socket 0 00:04:10.882 EAL: Detected lcore 8 as core 10 on socket 0 00:04:10.882 EAL: Detected lcore 9 as core 11 on socket 0 00:04:10.882 EAL: Detected lcore 10 as core 12 on socket 0 00:04:10.882 EAL: Detected lcore 11 as core 13 on socket 0 00:04:10.882 EAL: Detected lcore 12 as core 0 on socket 1 00:04:10.882 EAL: Detected lcore 13 as core 1 on socket 1 00:04:10.882 EAL: Detected lcore 14 as core 2 on socket 1 00:04:10.882 EAL: Detected lcore 15 as core 3 on socket 1 00:04:10.882 EAL: Detected lcore 16 as core 4 on socket 1 00:04:10.882 EAL: Detected lcore 17 as core 5 on socket 1 00:04:10.882 EAL: Detected lcore 18 as core 8 on socket 1 00:04:10.882 EAL: Detected lcore 19 as core 9 on socket 1 00:04:10.882 EAL: Detected lcore 20 as core 10 on socket 1 00:04:10.882 EAL: Detected lcore 21 as core 11 on socket 1 00:04:10.882 EAL: Detected lcore 22 as core 12 on socket 1 00:04:10.882 EAL: Detected lcore 23 as core 13 on socket 1 00:04:10.882 EAL: Detected lcore 24 as core 0 on socket 0 00:04:10.882 EAL: Detected lcore 25 as core 
1 on socket 0 00:04:10.882 EAL: Detected lcore 26 as core 2 on socket 0 00:04:10.882 EAL: Detected lcore 27 as core 3 on socket 0 00:04:10.882 EAL: Detected lcore 28 as core 4 on socket 0 00:04:10.882 EAL: Detected lcore 29 as core 5 on socket 0 00:04:10.882 EAL: Detected lcore 30 as core 8 on socket 0 00:04:10.882 EAL: Detected lcore 31 as core 9 on socket 0 00:04:10.882 EAL: Detected lcore 32 as core 10 on socket 0 00:04:10.882 EAL: Detected lcore 33 as core 11 on socket 0 00:04:10.882 EAL: Detected lcore 34 as core 12 on socket 0 00:04:10.882 EAL: Detected lcore 35 as core 13 on socket 0 00:04:10.882 EAL: Detected lcore 36 as core 0 on socket 1 00:04:10.882 EAL: Detected lcore 37 as core 1 on socket 1 00:04:10.882 EAL: Detected lcore 38 as core 2 on socket 1 00:04:10.882 EAL: Detected lcore 39 as core 3 on socket 1 00:04:10.882 EAL: Detected lcore 40 as core 4 on socket 1 00:04:10.882 EAL: Detected lcore 41 as core 5 on socket 1 00:04:10.882 EAL: Detected lcore 42 as core 8 on socket 1 00:04:10.882 EAL: Detected lcore 43 as core 9 on socket 1 00:04:10.882 EAL: Detected lcore 44 as core 10 on socket 1 00:04:10.882 EAL: Detected lcore 45 as core 11 on socket 1 00:04:10.882 EAL: Detected lcore 46 as core 12 on socket 1 00:04:10.882 EAL: Detected lcore 47 as core 13 on socket 1 00:04:10.882 EAL: Maximum logical cores by configuration: 128 00:04:10.882 EAL: Detected CPU lcores: 48 00:04:10.882 EAL: Detected NUMA nodes: 2 00:04:10.882 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:10.882 EAL: Detected shared linkage of DPDK 00:04:10.882 EAL: No shared files mode enabled, IPC will be disabled 00:04:10.882 EAL: Bus pci wants IOVA as 'DC' 00:04:10.882 EAL: Buses did not request a specific IOVA mode. 00:04:10.882 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:10.882 EAL: Selected IOVA mode 'VA' 00:04:10.882 EAL: Probing VFIO support... 
00:04:10.882 EAL: IOMMU type 1 (Type 1) is supported 00:04:10.882 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:10.882 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:10.882 EAL: VFIO support initialized 00:04:10.882 EAL: Ask a virtual area of 0x2e000 bytes 00:04:10.882 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:10.882 EAL: Setting up physically contiguous memory... 00:04:10.882 EAL: Setting maximum number of open files to 524288 00:04:10.882 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:10.882 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:10.882 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:10.882 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.882 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:10.882 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.882 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.882 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:10.882 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:10.882 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.882 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:10.882 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.882 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.882 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:10.882 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:10.882 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.882 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:10.882 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.882 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.882 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:10.882 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:10.882 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.882 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:10.882 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:10.882 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.882 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:10.882 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:10.882 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:10.882 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.882 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:10.882 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:10.882 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.882 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:10.882 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:10.882 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.882 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:10.882 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:10.882 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.882 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:10.882 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:10.882 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.882 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:10.882 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:10.882 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.882 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:10.882 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:10.882 EAL: Ask a virtual area of 0x61000 bytes 00:04:10.882 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:10.882 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:10.882 EAL: Ask a virtual area of 0x400000000 bytes 00:04:10.882 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:10.882 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:10.882 EAL: Hugepages will be freed exactly as allocated. 00:04:10.882 EAL: No shared files mode enabled, IPC is disabled 00:04:10.882 EAL: No shared files mode enabled, IPC is disabled 00:04:10.882 EAL: TSC frequency is ~2700000 KHz 00:04:10.882 EAL: Main lcore 0 is ready (tid=7f733f282a40;cpuset=[0]) 00:04:10.882 EAL: Trying to obtain current memory policy. 00:04:10.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.882 EAL: Restoring previous memory policy: 0 00:04:10.882 EAL: request: mp_malloc_sync 00:04:10.882 EAL: No shared files mode enabled, IPC is disabled 00:04:10.882 EAL: Heap on socket 0 was expanded by 2MB 00:04:10.882 EAL: No shared files mode enabled, IPC is disabled 00:04:10.882 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:10.882 EAL: Mem event callback 'spdk:(nil)' registered 00:04:10.882 00:04:10.882 00:04:10.882 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.882 http://cunit.sourceforge.net/ 00:04:10.882 00:04:10.882 00:04:10.882 Suite: components_suite 00:04:11.448 Test: vtophys_malloc_test ...passed 00:04:11.448 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:11.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.448 EAL: Restoring previous memory policy: 4 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was expanded by 4MB 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was shrunk by 4MB 00:04:11.448 EAL: Trying to obtain current memory policy. 
00:04:11.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.448 EAL: Restoring previous memory policy: 4 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was expanded by 6MB 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was shrunk by 6MB 00:04:11.448 EAL: Trying to obtain current memory policy. 00:04:11.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.448 EAL: Restoring previous memory policy: 4 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was expanded by 10MB 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was shrunk by 10MB 00:04:11.448 EAL: Trying to obtain current memory policy. 00:04:11.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.448 EAL: Restoring previous memory policy: 4 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was expanded by 18MB 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was shrunk by 18MB 00:04:11.448 EAL: Trying to obtain current memory policy. 
00:04:11.448 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.448 EAL: Restoring previous memory policy: 4 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was expanded by 34MB 00:04:11.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.448 EAL: request: mp_malloc_sync 00:04:11.448 EAL: No shared files mode enabled, IPC is disabled 00:04:11.448 EAL: Heap on socket 0 was shrunk by 34MB 00:04:11.706 EAL: Trying to obtain current memory policy. 00:04:11.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.706 EAL: Restoring previous memory policy: 4 00:04:11.706 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.706 EAL: request: mp_malloc_sync 00:04:11.706 EAL: No shared files mode enabled, IPC is disabled 00:04:11.706 EAL: Heap on socket 0 was expanded by 66MB 00:04:11.706 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.706 EAL: request: mp_malloc_sync 00:04:11.706 EAL: No shared files mode enabled, IPC is disabled 00:04:11.706 EAL: Heap on socket 0 was shrunk by 66MB 00:04:11.965 EAL: Trying to obtain current memory policy. 00:04:11.965 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.965 EAL: Restoring previous memory policy: 4 00:04:11.965 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.965 EAL: request: mp_malloc_sync 00:04:11.965 EAL: No shared files mode enabled, IPC is disabled 00:04:11.965 EAL: Heap on socket 0 was expanded by 130MB 00:04:12.223 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.223 EAL: request: mp_malloc_sync 00:04:12.223 EAL: No shared files mode enabled, IPC is disabled 00:04:12.223 EAL: Heap on socket 0 was shrunk by 130MB 00:04:12.223 EAL: Trying to obtain current memory policy. 
00:04:12.223 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.483 EAL: Restoring previous memory policy: 4 00:04:12.483 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.483 EAL: request: mp_malloc_sync 00:04:12.483 EAL: No shared files mode enabled, IPC is disabled 00:04:12.483 EAL: Heap on socket 0 was expanded by 258MB 00:04:13.050 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.050 EAL: request: mp_malloc_sync 00:04:13.050 EAL: No shared files mode enabled, IPC is disabled 00:04:13.050 EAL: Heap on socket 0 was shrunk by 258MB 00:04:13.308 EAL: Trying to obtain current memory policy. 00:04:13.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.567 EAL: Restoring previous memory policy: 4 00:04:13.567 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.567 EAL: request: mp_malloc_sync 00:04:13.567 EAL: No shared files mode enabled, IPC is disabled 00:04:13.567 EAL: Heap on socket 0 was expanded by 514MB 00:04:14.501 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.501 EAL: request: mp_malloc_sync 00:04:14.501 EAL: No shared files mode enabled, IPC is disabled 00:04:14.501 EAL: Heap on socket 0 was shrunk by 514MB 00:04:15.435 EAL: Trying to obtain current memory policy. 
00:04:15.435 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.694 EAL: Restoring previous memory policy: 4 00:04:15.694 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.694 EAL: request: mp_malloc_sync 00:04:15.694 EAL: No shared files mode enabled, IPC is disabled 00:04:15.694 EAL: Heap on socket 0 was expanded by 1026MB 00:04:17.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.852 EAL: request: mp_malloc_sync 00:04:17.852 EAL: No shared files mode enabled, IPC is disabled 00:04:17.852 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:19.227 passed 00:04:19.227 00:04:19.227 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.227 suites 1 1 n/a 0 0 00:04:19.227 tests 2 2 2 0 0 00:04:19.227 asserts 497 497 497 0 n/a 00:04:19.227 00:04:19.227 Elapsed time = 8.283 seconds 00:04:19.227 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.227 EAL: request: mp_malloc_sync 00:04:19.227 EAL: No shared files mode enabled, IPC is disabled 00:04:19.227 EAL: Heap on socket 0 was shrunk by 2MB 00:04:19.227 EAL: No shared files mode enabled, IPC is disabled 00:04:19.227 EAL: No shared files mode enabled, IPC is disabled 00:04:19.227 EAL: No shared files mode enabled, IPC is disabled 00:04:19.227 00:04:19.227 real 0m8.569s 00:04:19.227 user 0m7.428s 00:04:19.227 sys 0m1.080s 00:04:19.227 07:28:11 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.227 07:28:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:19.227 ************************************ 00:04:19.227 END TEST env_vtophys 00:04:19.227 ************************************ 00:04:19.486 07:28:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:19.486 07:28:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.486 07:28:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.486 07:28:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.486 
************************************ 00:04:19.486 START TEST env_pci 00:04:19.486 ************************************ 00:04:19.486 07:28:11 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:19.486 00:04:19.486 00:04:19.486 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.486 http://cunit.sourceforge.net/ 00:04:19.486 00:04:19.486 00:04:19.486 Suite: pci 00:04:19.486 Test: pci_hook ...[2024-11-19 07:28:11.220667] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2811968 has claimed it 00:04:19.486 EAL: Cannot find device (10000:00:01.0) 00:04:19.486 EAL: Failed to attach device on primary process 00:04:19.486 passed 00:04:19.486 00:04:19.486 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.486 suites 1 1 n/a 0 0 00:04:19.486 tests 1 1 1 0 0 00:04:19.486 asserts 25 25 25 0 n/a 00:04:19.486 00:04:19.486 Elapsed time = 0.042 seconds 00:04:19.486 00:04:19.486 real 0m0.094s 00:04:19.486 user 0m0.039s 00:04:19.486 sys 0m0.054s 00:04:19.487 07:28:11 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.487 07:28:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:19.487 ************************************ 00:04:19.487 END TEST env_pci 00:04:19.487 ************************************ 00:04:19.487 07:28:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:19.487 07:28:11 env -- env/env.sh@15 -- # uname 00:04:19.487 07:28:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:19.487 07:28:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:19.487 07:28:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.487 07:28:11 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:19.487 07:28:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.487 07:28:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.487 ************************************ 00:04:19.487 START TEST env_dpdk_post_init 00:04:19.487 ************************************ 00:04:19.487 07:28:11 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.487 EAL: Detected CPU lcores: 48 00:04:19.487 EAL: Detected NUMA nodes: 2 00:04:19.487 EAL: Detected shared linkage of DPDK 00:04:19.487 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.745 EAL: Selected IOVA mode 'VA' 00:04:19.746 EAL: VFIO support initialized 00:04:19.746 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.746 EAL: Using IOMMU type 1 (Type 1) 00:04:19.746 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:19.746 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:19.746 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:19.746 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:19.746 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:19.746 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:19.746 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:20.004 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:20.004 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:20.004 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:20.004 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:20.004 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:20.004 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:20.004 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:20.004 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:20.004 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:20.940 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:24.224 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:24.224 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:24.224 Starting DPDK initialization... 00:04:24.224 Starting SPDK post initialization... 00:04:24.224 SPDK NVMe probe 00:04:24.224 Attaching to 0000:88:00.0 00:04:24.224 Attached to 0000:88:00.0 00:04:24.224 Cleaning up... 00:04:24.224 00:04:24.224 real 0m4.597s 00:04:24.224 user 0m3.122s 00:04:24.224 sys 0m0.528s 00:04:24.224 07:28:15 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.224 07:28:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.224 ************************************ 00:04:24.224 END TEST env_dpdk_post_init 00:04:24.224 ************************************ 00:04:24.224 07:28:15 env -- env/env.sh@26 -- # uname 00:04:24.224 07:28:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.224 07:28:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.224 07:28:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.224 07:28:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.224 07:28:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.224 ************************************ 00:04:24.224 START TEST env_mem_callbacks 00:04:24.224 ************************************ 00:04:24.224 07:28:15 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.224 EAL: Detected CPU lcores: 48 00:04:24.224 EAL: Detected NUMA nodes: 2 00:04:24.224 EAL: Detected shared linkage of DPDK 00:04:24.224 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.224 EAL: Selected IOVA mode 'VA' 00:04:24.224 EAL: VFIO support initialized 00:04:24.224 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.224 00:04:24.224 00:04:24.224 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.224 http://cunit.sourceforge.net/ 00:04:24.224 00:04:24.224 00:04:24.224 Suite: memory 00:04:24.224 Test: test ... 00:04:24.224 register 0x200000200000 2097152 00:04:24.224 malloc 3145728 00:04:24.224 register 0x200000400000 4194304 00:04:24.224 buf 0x2000004fffc0 len 3145728 PASSED 00:04:24.224 malloc 64 00:04:24.224 buf 0x2000004ffec0 len 64 PASSED 00:04:24.224 malloc 4194304 00:04:24.224 register 0x200000800000 6291456 00:04:24.224 buf 0x2000009fffc0 len 4194304 PASSED 00:04:24.224 free 0x2000004fffc0 3145728 00:04:24.224 free 0x2000004ffec0 64 00:04:24.224 unregister 0x200000400000 4194304 PASSED 00:04:24.224 free 0x2000009fffc0 4194304 00:04:24.224 unregister 0x200000800000 6291456 PASSED 00:04:24.224 malloc 8388608 00:04:24.224 register 0x200000400000 10485760 00:04:24.224 buf 0x2000005fffc0 len 8388608 PASSED 00:04:24.224 free 0x2000005fffc0 8388608 00:04:24.224 unregister 0x200000400000 10485760 PASSED 00:04:24.224 passed 00:04:24.224 00:04:24.224 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.224 suites 1 1 n/a 0 0 00:04:24.224 tests 1 1 1 0 0 00:04:24.224 asserts 15 15 15 0 n/a 00:04:24.224 00:04:24.224 Elapsed time = 0.060 seconds 00:04:24.483 00:04:24.483 real 0m0.182s 00:04:24.483 user 0m0.095s 00:04:24.483 sys 0m0.086s 00:04:24.483 07:28:16 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.483 07:28:16 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:24.483 ************************************ 00:04:24.483 END TEST env_mem_callbacks 00:04:24.483 ************************************ 00:04:24.483 00:04:24.483 real 0m14.096s 00:04:24.483 user 0m11.113s 00:04:24.483 sys 0m1.996s 00:04:24.483 07:28:16 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.483 07:28:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.483 ************************************ 00:04:24.483 END TEST env 00:04:24.483 ************************************ 00:04:24.483 07:28:16 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:24.483 07:28:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.483 07:28:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.483 07:28:16 -- common/autotest_common.sh@10 -- # set +x 00:04:24.483 ************************************ 00:04:24.483 START TEST rpc 00:04:24.483 ************************************ 00:04:24.483 07:28:16 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:24.483 * Looking for test storage... 
00:04:24.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:24.483 07:28:16 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.483 07:28:16 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.483 07:28:16 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.483 07:28:16 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.483 07:28:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.483 07:28:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.483 07:28:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.483 07:28:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.483 07:28:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.483 07:28:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.483 07:28:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.483 07:28:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.483 07:28:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.483 07:28:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.483 07:28:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.483 07:28:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.483 07:28:16 rpc -- scripts/common.sh@345 -- # : 1 00:04:24.483 07:28:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.483 07:28:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.483 07:28:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.483 07:28:16 rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.483 07:28:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.483 07:28:16 rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.483 07:28:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.483 07:28:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.483 07:28:16 rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.483 07:28:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.483 07:28:16 rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.483 07:28:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.483 07:28:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.483 07:28:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.483 07:28:16 rpc -- scripts/common.sh@368 -- # return 0 00:04:24.483 07:28:16 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.484 --rc genhtml_branch_coverage=1 00:04:24.484 --rc genhtml_function_coverage=1 00:04:24.484 --rc genhtml_legend=1 00:04:24.484 --rc geninfo_all_blocks=1 00:04:24.484 --rc geninfo_unexecuted_blocks=1 00:04:24.484 00:04:24.484 ' 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.484 --rc genhtml_branch_coverage=1 00:04:24.484 --rc genhtml_function_coverage=1 00:04:24.484 --rc genhtml_legend=1 00:04:24.484 --rc geninfo_all_blocks=1 00:04:24.484 --rc geninfo_unexecuted_blocks=1 00:04:24.484 00:04:24.484 ' 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:24.484 --rc genhtml_branch_coverage=1 00:04:24.484 --rc genhtml_function_coverage=1 00:04:24.484 --rc genhtml_legend=1 00:04:24.484 --rc geninfo_all_blocks=1 00:04:24.484 --rc geninfo_unexecuted_blocks=1 00:04:24.484 00:04:24.484 ' 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.484 --rc genhtml_branch_coverage=1 00:04:24.484 --rc genhtml_function_coverage=1 00:04:24.484 --rc genhtml_legend=1 00:04:24.484 --rc geninfo_all_blocks=1 00:04:24.484 --rc geninfo_unexecuted_blocks=1 00:04:24.484 00:04:24.484 ' 00:04:24.484 07:28:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2812760 00:04:24.484 07:28:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:24.484 07:28:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.484 07:28:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2812760 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@835 -- # '[' -z 2812760 ']' 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.484 07:28:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.743 [2024-11-19 07:28:16.486074] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:24.743 [2024-11-19 07:28:16.486229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812760 ] 00:04:24.743 [2024-11-19 07:28:16.627944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.002 [2024-11-19 07:28:16.764510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:25.002 [2024-11-19 07:28:16.764593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2812760' to capture a snapshot of events at runtime. 00:04:25.002 [2024-11-19 07:28:16.764621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:25.002 [2024-11-19 07:28:16.764644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:25.002 [2024-11-19 07:28:16.764674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2812760 for offline analysis/debug. 
00:04:25.002 [2024-11-19 07:28:16.766253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.937 07:28:17 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.937 07:28:17 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:25.937 07:28:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.937 07:28:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.937 07:28:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:25.937 07:28:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:25.937 07:28:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.937 07:28:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.937 07:28:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.937 ************************************ 00:04:25.937 START TEST rpc_integrity 00:04:25.937 ************************************ 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:25.937 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.937 07:28:17 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.937 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.937 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.937 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.937 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:25.937 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.937 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.937 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.937 { 00:04:25.937 "name": "Malloc0", 00:04:25.937 "aliases": [ 00:04:25.937 "1aecbbbc-f474-4a7d-a45c-264bcd661929" 00:04:25.937 ], 00:04:25.937 "product_name": "Malloc disk", 00:04:25.937 "block_size": 512, 00:04:25.937 "num_blocks": 16384, 00:04:25.937 "uuid": "1aecbbbc-f474-4a7d-a45c-264bcd661929", 00:04:25.937 "assigned_rate_limits": { 00:04:25.937 "rw_ios_per_sec": 0, 00:04:25.937 "rw_mbytes_per_sec": 0, 00:04:25.937 "r_mbytes_per_sec": 0, 00:04:25.937 "w_mbytes_per_sec": 0 00:04:25.937 }, 00:04:25.937 "claimed": false, 00:04:25.937 "zoned": false, 00:04:25.937 "supported_io_types": { 00:04:25.937 "read": true, 00:04:25.937 "write": true, 00:04:25.937 "unmap": true, 00:04:25.937 "flush": true, 00:04:25.937 "reset": true, 00:04:25.937 "nvme_admin": false, 00:04:25.937 "nvme_io": false, 00:04:25.937 "nvme_io_md": false, 00:04:25.937 "write_zeroes": true, 00:04:25.937 "zcopy": true, 00:04:25.937 "get_zone_info": false, 00:04:25.937 
"zone_management": false, 00:04:25.937 "zone_append": false, 00:04:25.937 "compare": false, 00:04:25.937 "compare_and_write": false, 00:04:25.937 "abort": true, 00:04:25.937 "seek_hole": false, 00:04:25.937 "seek_data": false, 00:04:25.937 "copy": true, 00:04:25.937 "nvme_iov_md": false 00:04:25.937 }, 00:04:25.937 "memory_domains": [ 00:04:25.937 { 00:04:25.937 "dma_device_id": "system", 00:04:25.937 "dma_device_type": 1 00:04:25.937 }, 00:04:25.937 { 00:04:25.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.937 "dma_device_type": 2 00:04:25.937 } 00:04:25.937 ], 00:04:25.937 "driver_specific": {} 00:04:25.937 } 00:04:25.937 ]' 00:04:25.937 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.196 [2024-11-19 07:28:17.878740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:26.196 [2024-11-19 07:28:17.878803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.196 [2024-11-19 07:28:17.878845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:26.196 [2024-11-19 07:28:17.878869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.196 [2024-11-19 07:28:17.881656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.196 [2024-11-19 07:28:17.881709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.196 Passthru0 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.196 { 00:04:26.196 "name": "Malloc0", 00:04:26.196 "aliases": [ 00:04:26.196 "1aecbbbc-f474-4a7d-a45c-264bcd661929" 00:04:26.196 ], 00:04:26.196 "product_name": "Malloc disk", 00:04:26.196 "block_size": 512, 00:04:26.196 "num_blocks": 16384, 00:04:26.196 "uuid": "1aecbbbc-f474-4a7d-a45c-264bcd661929", 00:04:26.196 "assigned_rate_limits": { 00:04:26.196 "rw_ios_per_sec": 0, 00:04:26.196 "rw_mbytes_per_sec": 0, 00:04:26.196 "r_mbytes_per_sec": 0, 00:04:26.196 "w_mbytes_per_sec": 0 00:04:26.196 }, 00:04:26.196 "claimed": true, 00:04:26.196 "claim_type": "exclusive_write", 00:04:26.196 "zoned": false, 00:04:26.196 "supported_io_types": { 00:04:26.196 "read": true, 00:04:26.196 "write": true, 00:04:26.196 "unmap": true, 00:04:26.196 "flush": true, 00:04:26.196 "reset": true, 00:04:26.196 "nvme_admin": false, 00:04:26.196 "nvme_io": false, 00:04:26.196 "nvme_io_md": false, 00:04:26.196 "write_zeroes": true, 00:04:26.196 "zcopy": true, 00:04:26.196 "get_zone_info": false, 00:04:26.196 "zone_management": false, 00:04:26.196 "zone_append": false, 00:04:26.196 "compare": false, 00:04:26.196 "compare_and_write": false, 00:04:26.196 "abort": true, 00:04:26.196 "seek_hole": false, 00:04:26.196 "seek_data": false, 00:04:26.196 "copy": true, 00:04:26.196 "nvme_iov_md": false 00:04:26.196 }, 00:04:26.196 "memory_domains": [ 00:04:26.196 { 00:04:26.196 "dma_device_id": "system", 00:04:26.196 "dma_device_type": 1 00:04:26.196 }, 00:04:26.196 { 00:04:26.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.196 "dma_device_type": 2 00:04:26.196 } 00:04:26.196 ], 00:04:26.196 "driver_specific": {} 00:04:26.196 }, 00:04:26.196 { 
00:04:26.196 "name": "Passthru0", 00:04:26.196 "aliases": [ 00:04:26.196 "95768a19-a398-5f4e-8bb5-2b148c06e077" 00:04:26.196 ], 00:04:26.196 "product_name": "passthru", 00:04:26.196 "block_size": 512, 00:04:26.196 "num_blocks": 16384, 00:04:26.196 "uuid": "95768a19-a398-5f4e-8bb5-2b148c06e077", 00:04:26.196 "assigned_rate_limits": { 00:04:26.196 "rw_ios_per_sec": 0, 00:04:26.196 "rw_mbytes_per_sec": 0, 00:04:26.196 "r_mbytes_per_sec": 0, 00:04:26.196 "w_mbytes_per_sec": 0 00:04:26.196 }, 00:04:26.196 "claimed": false, 00:04:26.196 "zoned": false, 00:04:26.196 "supported_io_types": { 00:04:26.196 "read": true, 00:04:26.196 "write": true, 00:04:26.196 "unmap": true, 00:04:26.196 "flush": true, 00:04:26.196 "reset": true, 00:04:26.196 "nvme_admin": false, 00:04:26.196 "nvme_io": false, 00:04:26.196 "nvme_io_md": false, 00:04:26.196 "write_zeroes": true, 00:04:26.196 "zcopy": true, 00:04:26.196 "get_zone_info": false, 00:04:26.196 "zone_management": false, 00:04:26.196 "zone_append": false, 00:04:26.196 "compare": false, 00:04:26.196 "compare_and_write": false, 00:04:26.196 "abort": true, 00:04:26.196 "seek_hole": false, 00:04:26.196 "seek_data": false, 00:04:26.196 "copy": true, 00:04:26.196 "nvme_iov_md": false 00:04:26.196 }, 00:04:26.196 "memory_domains": [ 00:04:26.196 { 00:04:26.196 "dma_device_id": "system", 00:04:26.196 "dma_device_type": 1 00:04:26.196 }, 00:04:26.196 { 00:04:26.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.196 "dma_device_type": 2 00:04:26.196 } 00:04:26.196 ], 00:04:26.196 "driver_specific": { 00:04:26.196 "passthru": { 00:04:26.196 "name": "Passthru0", 00:04:26.196 "base_bdev_name": "Malloc0" 00:04:26.196 } 00:04:26.196 } 00:04:26.196 } 00:04:26.196 ]' 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.196 07:28:17 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.196 07:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.196 07:28:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.196 07:28:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.196 00:04:26.196 real 0m0.263s 00:04:26.196 user 0m0.150s 00:04:26.196 sys 0m0.022s 00:04:26.196 07:28:18 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.196 07:28:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.196 ************************************ 00:04:26.196 END TEST rpc_integrity 00:04:26.196 ************************************ 00:04:26.196 07:28:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:26.196 07:28:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.196 07:28:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.196 07:28:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.196 ************************************ 00:04:26.196 START TEST rpc_plugins 
00:04:26.196 ************************************ 00:04:26.196 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:26.196 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:26.196 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.196 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.196 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.196 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:26.196 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:26.196 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.196 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.196 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.196 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:26.196 { 00:04:26.196 "name": "Malloc1", 00:04:26.196 "aliases": [ 00:04:26.196 "0b21e3a6-1027-43db-8d84-0cf9bee349c3" 00:04:26.196 ], 00:04:26.196 "product_name": "Malloc disk", 00:04:26.196 "block_size": 4096, 00:04:26.196 "num_blocks": 256, 00:04:26.196 "uuid": "0b21e3a6-1027-43db-8d84-0cf9bee349c3", 00:04:26.196 "assigned_rate_limits": { 00:04:26.196 "rw_ios_per_sec": 0, 00:04:26.196 "rw_mbytes_per_sec": 0, 00:04:26.196 "r_mbytes_per_sec": 0, 00:04:26.196 "w_mbytes_per_sec": 0 00:04:26.196 }, 00:04:26.196 "claimed": false, 00:04:26.196 "zoned": false, 00:04:26.196 "supported_io_types": { 00:04:26.196 "read": true, 00:04:26.196 "write": true, 00:04:26.196 "unmap": true, 00:04:26.196 "flush": true, 00:04:26.196 "reset": true, 00:04:26.196 "nvme_admin": false, 00:04:26.196 "nvme_io": false, 00:04:26.196 "nvme_io_md": false, 00:04:26.196 "write_zeroes": true, 00:04:26.197 "zcopy": true, 00:04:26.197 "get_zone_info": false, 00:04:26.197 "zone_management": false, 00:04:26.197 
"zone_append": false, 00:04:26.197 "compare": false, 00:04:26.197 "compare_and_write": false, 00:04:26.197 "abort": true, 00:04:26.197 "seek_hole": false, 00:04:26.197 "seek_data": false, 00:04:26.197 "copy": true, 00:04:26.197 "nvme_iov_md": false 00:04:26.197 }, 00:04:26.197 "memory_domains": [ 00:04:26.197 { 00:04:26.197 "dma_device_id": "system", 00:04:26.197 "dma_device_type": 1 00:04:26.197 }, 00:04:26.197 { 00:04:26.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.197 "dma_device_type": 2 00:04:26.197 } 00:04:26.197 ], 00:04:26.197 "driver_specific": {} 00:04:26.197 } 00:04:26.197 ]' 00:04:26.197 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:26.197 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:26.197 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:26.197 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.197 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.455 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.455 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:26.455 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.455 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.455 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.455 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:26.455 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:26.455 07:28:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:26.455 00:04:26.455 real 0m0.117s 00:04:26.455 user 0m0.072s 00:04:26.455 sys 0m0.011s 00:04:26.455 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.455 07:28:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.455 ************************************ 
00:04:26.455 END TEST rpc_plugins 00:04:26.455 ************************************ 00:04:26.455 07:28:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:26.455 07:28:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.455 07:28:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.455 07:28:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.455 ************************************ 00:04:26.455 START TEST rpc_trace_cmd_test 00:04:26.455 ************************************ 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:26.455 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2812760", 00:04:26.455 "tpoint_group_mask": "0x8", 00:04:26.455 "iscsi_conn": { 00:04:26.455 "mask": "0x2", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "scsi": { 00:04:26.455 "mask": "0x4", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "bdev": { 00:04:26.455 "mask": "0x8", 00:04:26.455 "tpoint_mask": "0xffffffffffffffff" 00:04:26.455 }, 00:04:26.455 "nvmf_rdma": { 00:04:26.455 "mask": "0x10", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "nvmf_tcp": { 00:04:26.455 "mask": "0x20", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "ftl": { 00:04:26.455 "mask": "0x40", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "blobfs": { 00:04:26.455 "mask": "0x80", 00:04:26.455 
"tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "dsa": { 00:04:26.455 "mask": "0x200", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "thread": { 00:04:26.455 "mask": "0x400", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "nvme_pcie": { 00:04:26.455 "mask": "0x800", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "iaa": { 00:04:26.455 "mask": "0x1000", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "nvme_tcp": { 00:04:26.455 "mask": "0x2000", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "bdev_nvme": { 00:04:26.455 "mask": "0x4000", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "sock": { 00:04:26.455 "mask": "0x8000", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "blob": { 00:04:26.455 "mask": "0x10000", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "bdev_raid": { 00:04:26.455 "mask": "0x20000", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 }, 00:04:26.455 "scheduler": { 00:04:26.455 "mask": "0x40000", 00:04:26.455 "tpoint_mask": "0x0" 00:04:26.455 } 00:04:26.455 }' 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:26.455 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:26.714 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:26.714 07:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:26.714 00:04:26.714 real 0m0.204s 00:04:26.714 user 0m0.171s 00:04:26.714 sys 0m0.023s 00:04:26.714 07:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.714 07:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.714 ************************************ 00:04:26.714 END TEST rpc_trace_cmd_test 00:04:26.714 ************************************ 00:04:26.714 07:28:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:26.714 07:28:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:26.714 07:28:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:26.714 07:28:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.714 07:28:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.714 07:28:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.714 ************************************ 00:04:26.714 START TEST rpc_daemon_integrity 00:04:26.714 ************************************ 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.714 { 00:04:26.714 "name": "Malloc2", 00:04:26.714 "aliases": [ 00:04:26.714 "0bdbe676-b682-461d-a97c-570d9d5466e9" 00:04:26.714 ], 00:04:26.714 "product_name": "Malloc disk", 00:04:26.714 "block_size": 512, 00:04:26.714 "num_blocks": 16384, 00:04:26.714 "uuid": "0bdbe676-b682-461d-a97c-570d9d5466e9", 00:04:26.714 "assigned_rate_limits": { 00:04:26.714 "rw_ios_per_sec": 0, 00:04:26.714 "rw_mbytes_per_sec": 0, 00:04:26.714 "r_mbytes_per_sec": 0, 00:04:26.714 "w_mbytes_per_sec": 0 00:04:26.714 }, 00:04:26.714 "claimed": false, 00:04:26.714 "zoned": false, 00:04:26.714 "supported_io_types": { 00:04:26.714 "read": true, 00:04:26.714 "write": true, 00:04:26.714 "unmap": true, 00:04:26.714 "flush": true, 00:04:26.714 "reset": true, 00:04:26.714 "nvme_admin": false, 00:04:26.714 "nvme_io": false, 00:04:26.714 "nvme_io_md": false, 00:04:26.714 "write_zeroes": true, 00:04:26.714 "zcopy": true, 00:04:26.714 "get_zone_info": false, 00:04:26.714 "zone_management": false, 00:04:26.714 "zone_append": false, 00:04:26.714 "compare": false, 00:04:26.714 "compare_and_write": false, 00:04:26.714 "abort": true, 00:04:26.714 "seek_hole": false, 00:04:26.714 "seek_data": false, 00:04:26.714 "copy": true, 00:04:26.714 "nvme_iov_md": false 00:04:26.714 }, 00:04:26.714 "memory_domains": [ 00:04:26.714 { 
00:04:26.714 "dma_device_id": "system", 00:04:26.714 "dma_device_type": 1 00:04:26.714 }, 00:04:26.714 { 00:04:26.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.714 "dma_device_type": 2 00:04:26.714 } 00:04:26.714 ], 00:04:26.714 "driver_specific": {} 00:04:26.714 } 00:04:26.714 ]' 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.714 [2024-11-19 07:28:18.588936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:26.714 [2024-11-19 07:28:18.589006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.714 [2024-11-19 07:28:18.589062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:26.714 [2024-11-19 07:28:18.589089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.714 [2024-11-19 07:28:18.591866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.714 [2024-11-19 07:28:18.591899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.714 Passthru0 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.714 { 00:04:26.714 "name": "Malloc2", 00:04:26.714 "aliases": [ 00:04:26.714 "0bdbe676-b682-461d-a97c-570d9d5466e9" 00:04:26.714 ], 00:04:26.714 "product_name": "Malloc disk", 00:04:26.714 "block_size": 512, 00:04:26.714 "num_blocks": 16384, 00:04:26.714 "uuid": "0bdbe676-b682-461d-a97c-570d9d5466e9", 00:04:26.714 "assigned_rate_limits": { 00:04:26.714 "rw_ios_per_sec": 0, 00:04:26.714 "rw_mbytes_per_sec": 0, 00:04:26.714 "r_mbytes_per_sec": 0, 00:04:26.714 "w_mbytes_per_sec": 0 00:04:26.714 }, 00:04:26.714 "claimed": true, 00:04:26.714 "claim_type": "exclusive_write", 00:04:26.714 "zoned": false, 00:04:26.714 "supported_io_types": { 00:04:26.714 "read": true, 00:04:26.714 "write": true, 00:04:26.714 "unmap": true, 00:04:26.714 "flush": true, 00:04:26.714 "reset": true, 00:04:26.714 "nvme_admin": false, 00:04:26.714 "nvme_io": false, 00:04:26.714 "nvme_io_md": false, 00:04:26.714 "write_zeroes": true, 00:04:26.714 "zcopy": true, 00:04:26.714 "get_zone_info": false, 00:04:26.714 "zone_management": false, 00:04:26.714 "zone_append": false, 00:04:26.714 "compare": false, 00:04:26.714 "compare_and_write": false, 00:04:26.714 "abort": true, 00:04:26.714 "seek_hole": false, 00:04:26.714 "seek_data": false, 00:04:26.714 "copy": true, 00:04:26.714 "nvme_iov_md": false 00:04:26.714 }, 00:04:26.714 "memory_domains": [ 00:04:26.714 { 00:04:26.714 "dma_device_id": "system", 00:04:26.714 "dma_device_type": 1 00:04:26.714 }, 00:04:26.714 { 00:04:26.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.714 "dma_device_type": 2 00:04:26.714 } 00:04:26.714 ], 00:04:26.714 "driver_specific": {} 00:04:26.714 }, 00:04:26.714 { 00:04:26.714 "name": "Passthru0", 00:04:26.714 "aliases": [ 00:04:26.714 "1d629779-5f2d-5fd6-ac58-b6029c300e3d" 00:04:26.714 ], 00:04:26.714 "product_name": "passthru", 00:04:26.714 "block_size": 512, 00:04:26.714 "num_blocks": 16384, 00:04:26.714 "uuid": 
"1d629779-5f2d-5fd6-ac58-b6029c300e3d", 00:04:26.714 "assigned_rate_limits": { 00:04:26.714 "rw_ios_per_sec": 0, 00:04:26.714 "rw_mbytes_per_sec": 0, 00:04:26.714 "r_mbytes_per_sec": 0, 00:04:26.714 "w_mbytes_per_sec": 0 00:04:26.714 }, 00:04:26.714 "claimed": false, 00:04:26.714 "zoned": false, 00:04:26.714 "supported_io_types": { 00:04:26.714 "read": true, 00:04:26.714 "write": true, 00:04:26.714 "unmap": true, 00:04:26.714 "flush": true, 00:04:26.714 "reset": true, 00:04:26.714 "nvme_admin": false, 00:04:26.714 "nvme_io": false, 00:04:26.714 "nvme_io_md": false, 00:04:26.714 "write_zeroes": true, 00:04:26.714 "zcopy": true, 00:04:26.714 "get_zone_info": false, 00:04:26.714 "zone_management": false, 00:04:26.714 "zone_append": false, 00:04:26.714 "compare": false, 00:04:26.714 "compare_and_write": false, 00:04:26.714 "abort": true, 00:04:26.714 "seek_hole": false, 00:04:26.714 "seek_data": false, 00:04:26.714 "copy": true, 00:04:26.714 "nvme_iov_md": false 00:04:26.714 }, 00:04:26.714 "memory_domains": [ 00:04:26.714 { 00:04:26.714 "dma_device_id": "system", 00:04:26.714 "dma_device_type": 1 00:04:26.714 }, 00:04:26.714 { 00:04:26.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.714 "dma_device_type": 2 00:04:26.714 } 00:04:26.714 ], 00:04:26.714 "driver_specific": { 00:04:26.714 "passthru": { 00:04:26.714 "name": "Passthru0", 00:04:26.714 "base_bdev_name": "Malloc2" 00:04:26.714 } 00:04:26.714 } 00:04:26.714 } 00:04:26.714 ]' 00:04:26.714 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.973 00:04:26.973 real 0m0.262s 00:04:26.973 user 0m0.155s 00:04:26.973 sys 0m0.019s 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.973 07:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.973 ************************************ 00:04:26.973 END TEST rpc_daemon_integrity 00:04:26.973 ************************************ 00:04:26.973 07:28:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:26.973 07:28:18 rpc -- rpc/rpc.sh@84 -- # killprocess 2812760 00:04:26.973 07:28:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 2812760 ']' 00:04:26.973 07:28:18 rpc -- common/autotest_common.sh@958 -- # kill -0 2812760 00:04:26.973 07:28:18 rpc -- common/autotest_common.sh@959 -- # uname 00:04:26.973 07:28:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.973 07:28:18 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2812760 00:04:26.973 07:28:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.973 07:28:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.973 07:28:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2812760' 00:04:26.973 killing process with pid 2812760 00:04:26.973 07:28:18 rpc -- common/autotest_common.sh@973 -- # kill 2812760 00:04:26.973 07:28:18 rpc -- common/autotest_common.sh@978 -- # wait 2812760 00:04:29.503 00:04:29.503 real 0m4.977s 00:04:29.503 user 0m5.498s 00:04:29.503 sys 0m0.835s 00:04:29.503 07:28:21 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.503 07:28:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.503 ************************************ 00:04:29.504 END TEST rpc 00:04:29.504 ************************************ 00:04:29.504 07:28:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.504 07:28:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.504 07:28:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.504 07:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:29.504 ************************************ 00:04:29.504 START TEST skip_rpc 00:04:29.504 ************************************ 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.504 * Looking for test storage... 
00:04:29.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.504 07:28:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.504 --rc genhtml_branch_coverage=1 00:04:29.504 --rc genhtml_function_coverage=1 00:04:29.504 --rc genhtml_legend=1 00:04:29.504 --rc geninfo_all_blocks=1 00:04:29.504 --rc geninfo_unexecuted_blocks=1 00:04:29.504 00:04:29.504 ' 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.504 --rc genhtml_branch_coverage=1 00:04:29.504 --rc genhtml_function_coverage=1 00:04:29.504 --rc genhtml_legend=1 00:04:29.504 --rc geninfo_all_blocks=1 00:04:29.504 --rc geninfo_unexecuted_blocks=1 00:04:29.504 00:04:29.504 ' 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:29.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.504 --rc genhtml_branch_coverage=1 00:04:29.504 --rc genhtml_function_coverage=1 00:04:29.504 --rc genhtml_legend=1 00:04:29.504 --rc geninfo_all_blocks=1 00:04:29.504 --rc geninfo_unexecuted_blocks=1 00:04:29.504 00:04:29.504 ' 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.504 --rc genhtml_branch_coverage=1 00:04:29.504 --rc genhtml_function_coverage=1 00:04:29.504 --rc genhtml_legend=1 00:04:29.504 --rc geninfo_all_blocks=1 00:04:29.504 --rc geninfo_unexecuted_blocks=1 00:04:29.504 00:04:29.504 ' 00:04:29.504 07:28:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:29.504 07:28:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.504 07:28:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.504 07:28:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.762 ************************************ 00:04:29.763 START TEST skip_rpc 00:04:29.763 ************************************ 00:04:29.763 07:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:29.763 07:28:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2813488 00:04:29.763 07:28:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:29.763 07:28:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.763 07:28:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:29.763 [2024-11-19 07:28:21.529486] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:29.763 [2024-11-19 07:28:21.529631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813488 ] 00:04:29.763 [2024-11-19 07:28:21.671296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.021 [2024-11-19 07:28:21.810337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.287 07:28:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:35.287 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:35.287 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:35.287 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:35.287 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.288 07:28:26 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2813488 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2813488 ']' 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2813488 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813488 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813488' 00:04:35.288 killing process with pid 2813488 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2813488 00:04:35.288 07:28:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2813488 00:04:37.190 00:04:37.190 real 0m7.462s 00:04:37.190 user 0m6.973s 00:04:37.190 sys 0m0.476s 00:04:37.190 07:28:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.190 07:28:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.190 ************************************ 00:04:37.190 END TEST skip_rpc 00:04:37.190 ************************************ 00:04:37.190 07:28:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:37.190 07:28:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.190 07:28:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.190 07:28:28 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.190 ************************************ 00:04:37.190 START TEST skip_rpc_with_json 00:04:37.190 ************************************ 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2814438 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2814438 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2814438 ']' 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.190 07:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.190 [2024-11-19 07:28:29.046293] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:37.190 [2024-11-19 07:28:29.046455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814438 ] 00:04:37.460 [2024-11-19 07:28:29.182033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.460 [2024-11-19 07:28:29.314812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.399 [2024-11-19 07:28:30.276670] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:38.399 request: 00:04:38.399 { 00:04:38.399 "trtype": "tcp", 00:04:38.399 "method": "nvmf_get_transports", 00:04:38.399 "req_id": 1 00:04:38.399 } 00:04:38.399 Got JSON-RPC error response 00:04:38.399 response: 00:04:38.399 { 00:04:38.399 "code": -19, 00:04:38.399 "message": "No such device" 00:04:38.399 } 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.399 [2024-11-19 07:28:30.284829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.399 07:28:30 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.399 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.658 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.658 07:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:38.658 { 00:04:38.658 "subsystems": [ 00:04:38.658 { 00:04:38.658 "subsystem": "fsdev", 00:04:38.658 "config": [ 00:04:38.658 { 00:04:38.658 "method": "fsdev_set_opts", 00:04:38.658 "params": { 00:04:38.658 "fsdev_io_pool_size": 65535, 00:04:38.658 "fsdev_io_cache_size": 256 00:04:38.658 } 00:04:38.658 } 00:04:38.658 ] 00:04:38.658 }, 00:04:38.658 { 00:04:38.658 "subsystem": "keyring", 00:04:38.658 "config": [] 00:04:38.658 }, 00:04:38.658 { 00:04:38.658 "subsystem": "iobuf", 00:04:38.658 "config": [ 00:04:38.658 { 00:04:38.658 "method": "iobuf_set_options", 00:04:38.658 "params": { 00:04:38.658 "small_pool_count": 8192, 00:04:38.658 "large_pool_count": 1024, 00:04:38.658 "small_bufsize": 8192, 00:04:38.658 "large_bufsize": 135168, 00:04:38.658 "enable_numa": false 00:04:38.658 } 00:04:38.658 } 00:04:38.658 ] 00:04:38.658 }, 00:04:38.658 { 00:04:38.658 "subsystem": "sock", 00:04:38.658 "config": [ 00:04:38.658 { 00:04:38.658 "method": "sock_set_default_impl", 00:04:38.658 "params": { 00:04:38.658 "impl_name": "posix" 00:04:38.658 } 00:04:38.658 }, 00:04:38.658 { 00:04:38.658 "method": "sock_impl_set_options", 00:04:38.658 "params": { 00:04:38.658 "impl_name": "ssl", 00:04:38.658 "recv_buf_size": 4096, 00:04:38.658 "send_buf_size": 4096, 00:04:38.658 "enable_recv_pipe": true, 00:04:38.658 "enable_quickack": false, 00:04:38.658 
"enable_placement_id": 0, 00:04:38.658 "enable_zerocopy_send_server": true, 00:04:38.658 "enable_zerocopy_send_client": false, 00:04:38.658 "zerocopy_threshold": 0, 00:04:38.658 "tls_version": 0, 00:04:38.658 "enable_ktls": false 00:04:38.658 } 00:04:38.658 }, 00:04:38.658 { 00:04:38.658 "method": "sock_impl_set_options", 00:04:38.658 "params": { 00:04:38.658 "impl_name": "posix", 00:04:38.658 "recv_buf_size": 2097152, 00:04:38.658 "send_buf_size": 2097152, 00:04:38.658 "enable_recv_pipe": true, 00:04:38.658 "enable_quickack": false, 00:04:38.658 "enable_placement_id": 0, 00:04:38.658 "enable_zerocopy_send_server": true, 00:04:38.658 "enable_zerocopy_send_client": false, 00:04:38.658 "zerocopy_threshold": 0, 00:04:38.658 "tls_version": 0, 00:04:38.658 "enable_ktls": false 00:04:38.658 } 00:04:38.658 } 00:04:38.658 ] 00:04:38.658 }, 00:04:38.658 { 00:04:38.658 "subsystem": "vmd", 00:04:38.658 "config": [] 00:04:38.658 }, 00:04:38.658 { 00:04:38.658 "subsystem": "accel", 00:04:38.658 "config": [ 00:04:38.658 { 00:04:38.658 "method": "accel_set_options", 00:04:38.658 "params": { 00:04:38.658 "small_cache_size": 128, 00:04:38.658 "large_cache_size": 16, 00:04:38.658 "task_count": 2048, 00:04:38.658 "sequence_count": 2048, 00:04:38.658 "buf_count": 2048 00:04:38.658 } 00:04:38.658 } 00:04:38.658 ] 00:04:38.658 }, 00:04:38.659 { 00:04:38.659 "subsystem": "bdev", 00:04:38.659 "config": [ 00:04:38.659 { 00:04:38.659 "method": "bdev_set_options", 00:04:38.659 "params": { 00:04:38.659 "bdev_io_pool_size": 65535, 00:04:38.659 "bdev_io_cache_size": 256, 00:04:38.659 "bdev_auto_examine": true, 00:04:38.659 "iobuf_small_cache_size": 128, 00:04:38.659 "iobuf_large_cache_size": 16 00:04:38.659 } 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "method": "bdev_raid_set_options", 00:04:38.659 "params": { 00:04:38.659 "process_window_size_kb": 1024, 00:04:38.659 "process_max_bandwidth_mb_sec": 0 00:04:38.659 } 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "method": "bdev_iscsi_set_options", 
00:04:38.659 "params": { 00:04:38.659 "timeout_sec": 30 00:04:38.659 } 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "method": "bdev_nvme_set_options", 00:04:38.659 "params": { 00:04:38.659 "action_on_timeout": "none", 00:04:38.659 "timeout_us": 0, 00:04:38.659 "timeout_admin_us": 0, 00:04:38.659 "keep_alive_timeout_ms": 10000, 00:04:38.659 "arbitration_burst": 0, 00:04:38.659 "low_priority_weight": 0, 00:04:38.659 "medium_priority_weight": 0, 00:04:38.659 "high_priority_weight": 0, 00:04:38.659 "nvme_adminq_poll_period_us": 10000, 00:04:38.659 "nvme_ioq_poll_period_us": 0, 00:04:38.659 "io_queue_requests": 0, 00:04:38.659 "delay_cmd_submit": true, 00:04:38.659 "transport_retry_count": 4, 00:04:38.659 "bdev_retry_count": 3, 00:04:38.659 "transport_ack_timeout": 0, 00:04:38.659 "ctrlr_loss_timeout_sec": 0, 00:04:38.659 "reconnect_delay_sec": 0, 00:04:38.659 "fast_io_fail_timeout_sec": 0, 00:04:38.659 "disable_auto_failback": false, 00:04:38.659 "generate_uuids": false, 00:04:38.659 "transport_tos": 0, 00:04:38.659 "nvme_error_stat": false, 00:04:38.659 "rdma_srq_size": 0, 00:04:38.659 "io_path_stat": false, 00:04:38.659 "allow_accel_sequence": false, 00:04:38.659 "rdma_max_cq_size": 0, 00:04:38.659 "rdma_cm_event_timeout_ms": 0, 00:04:38.659 "dhchap_digests": [ 00:04:38.659 "sha256", 00:04:38.659 "sha384", 00:04:38.659 "sha512" 00:04:38.659 ], 00:04:38.659 "dhchap_dhgroups": [ 00:04:38.659 "null", 00:04:38.659 "ffdhe2048", 00:04:38.659 "ffdhe3072", 00:04:38.659 "ffdhe4096", 00:04:38.659 "ffdhe6144", 00:04:38.659 "ffdhe8192" 00:04:38.659 ] 00:04:38.659 } 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "method": "bdev_nvme_set_hotplug", 00:04:38.659 "params": { 00:04:38.659 "period_us": 100000, 00:04:38.659 "enable": false 00:04:38.659 } 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "method": "bdev_wait_for_examine" 00:04:38.659 } 00:04:38.659 ] 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "subsystem": "scsi", 00:04:38.659 "config": null 00:04:38.659 }, 00:04:38.659 { 
00:04:38.659 "subsystem": "scheduler", 00:04:38.659 "config": [ 00:04:38.659 { 00:04:38.659 "method": "framework_set_scheduler", 00:04:38.659 "params": { 00:04:38.659 "name": "static" 00:04:38.659 } 00:04:38.659 } 00:04:38.659 ] 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "subsystem": "vhost_scsi", 00:04:38.659 "config": [] 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "subsystem": "vhost_blk", 00:04:38.659 "config": [] 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "subsystem": "ublk", 00:04:38.659 "config": [] 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "subsystem": "nbd", 00:04:38.659 "config": [] 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "subsystem": "nvmf", 00:04:38.659 "config": [ 00:04:38.659 { 00:04:38.659 "method": "nvmf_set_config", 00:04:38.659 "params": { 00:04:38.659 "discovery_filter": "match_any", 00:04:38.659 "admin_cmd_passthru": { 00:04:38.659 "identify_ctrlr": false 00:04:38.659 }, 00:04:38.659 "dhchap_digests": [ 00:04:38.659 "sha256", 00:04:38.659 "sha384", 00:04:38.659 "sha512" 00:04:38.659 ], 00:04:38.659 "dhchap_dhgroups": [ 00:04:38.659 "null", 00:04:38.659 "ffdhe2048", 00:04:38.659 "ffdhe3072", 00:04:38.659 "ffdhe4096", 00:04:38.659 "ffdhe6144", 00:04:38.659 "ffdhe8192" 00:04:38.659 ] 00:04:38.659 } 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "method": "nvmf_set_max_subsystems", 00:04:38.659 "params": { 00:04:38.659 "max_subsystems": 1024 00:04:38.659 } 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "method": "nvmf_set_crdt", 00:04:38.659 "params": { 00:04:38.659 "crdt1": 0, 00:04:38.659 "crdt2": 0, 00:04:38.659 "crdt3": 0 00:04:38.659 } 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "method": "nvmf_create_transport", 00:04:38.659 "params": { 00:04:38.659 "trtype": "TCP", 00:04:38.659 "max_queue_depth": 128, 00:04:38.659 "max_io_qpairs_per_ctrlr": 127, 00:04:38.659 "in_capsule_data_size": 4096, 00:04:38.659 "max_io_size": 131072, 00:04:38.659 "io_unit_size": 131072, 00:04:38.659 "max_aq_depth": 128, 00:04:38.659 "num_shared_buffers": 511, 
00:04:38.659 "buf_cache_size": 4294967295, 00:04:38.659 "dif_insert_or_strip": false, 00:04:38.659 "zcopy": false, 00:04:38.659 "c2h_success": true, 00:04:38.659 "sock_priority": 0, 00:04:38.659 "abort_timeout_sec": 1, 00:04:38.659 "ack_timeout": 0, 00:04:38.659 "data_wr_pool_size": 0 00:04:38.659 } 00:04:38.659 } 00:04:38.659 ] 00:04:38.659 }, 00:04:38.659 { 00:04:38.659 "subsystem": "iscsi", 00:04:38.659 "config": [ 00:04:38.659 { 00:04:38.659 "method": "iscsi_set_options", 00:04:38.659 "params": { 00:04:38.659 "node_base": "iqn.2016-06.io.spdk", 00:04:38.659 "max_sessions": 128, 00:04:38.659 "max_connections_per_session": 2, 00:04:38.659 "max_queue_depth": 64, 00:04:38.659 "default_time2wait": 2, 00:04:38.659 "default_time2retain": 20, 00:04:38.659 "first_burst_length": 8192, 00:04:38.659 "immediate_data": true, 00:04:38.659 "allow_duplicated_isid": false, 00:04:38.659 "error_recovery_level": 0, 00:04:38.659 "nop_timeout": 60, 00:04:38.659 "nop_in_interval": 30, 00:04:38.659 "disable_chap": false, 00:04:38.659 "require_chap": false, 00:04:38.659 "mutual_chap": false, 00:04:38.659 "chap_group": 0, 00:04:38.659 "max_large_datain_per_connection": 64, 00:04:38.659 "max_r2t_per_connection": 4, 00:04:38.659 "pdu_pool_size": 36864, 00:04:38.659 "immediate_data_pool_size": 16384, 00:04:38.659 "data_out_pool_size": 2048 00:04:38.659 } 00:04:38.659 } 00:04:38.659 ] 00:04:38.659 } 00:04:38.659 ] 00:04:38.659 } 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2814438 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2814438 ']' 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2814438 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814438 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814438' 00:04:38.659 killing process with pid 2814438 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2814438 00:04:38.659 07:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2814438 00:04:41.258 07:28:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2814856 00:04:41.258 07:28:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:41.258 07:28:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2814856 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2814856 ']' 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2814856 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814856 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.523 07:28:37 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814856' 00:04:46.523 killing process with pid 2814856 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2814856 00:04:46.523 07:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2814856 00:04:49.051 07:28:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.051 07:28:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.051 00:04:49.051 real 0m11.426s 00:04:49.051 user 0m10.940s 00:04:49.051 sys 0m1.102s 00:04:49.051 07:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.051 07:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.051 ************************************ 00:04:49.051 END TEST skip_rpc_with_json 00:04:49.051 ************************************ 00:04:49.051 07:28:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:49.051 07:28:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.051 07:28:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.051 07:28:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.051 ************************************ 00:04:49.051 START TEST skip_rpc_with_delay 00:04:49.051 ************************************ 00:04:49.051 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:49.051 07:28:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.051 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:49.051 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.052 [2024-11-19 07:28:40.517913] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.052 00:04:49.052 real 0m0.152s 00:04:49.052 user 0m0.075s 00:04:49.052 sys 0m0.075s 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.052 07:28:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:49.052 ************************************ 00:04:49.052 END TEST skip_rpc_with_delay 00:04:49.052 ************************************ 00:04:49.052 07:28:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:49.052 07:28:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:49.052 07:28:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:49.052 07:28:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.052 07:28:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.052 07:28:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.052 ************************************ 00:04:49.052 START TEST exit_on_failed_rpc_init 00:04:49.052 ************************************ 00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2815841 00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2815841 
00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2815841 ']' 00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.052 07:28:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.052 [2024-11-19 07:28:40.718937] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:49.052 [2024-11-19 07:28:40.719114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2815841 ] 00:04:49.052 [2024-11-19 07:28:40.856080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.310 [2024-11-19 07:28:40.987150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.246 
07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.246 07:28:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.246 [2024-11-19 07:28:42.010577] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:04:50.246 [2024-11-19 07:28:42.010735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2815984 ] 00:04:50.246 [2024-11-19 07:28:42.157135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.505 [2024-11-19 07:28:42.295092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.505 [2024-11-19 07:28:42.295253] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:50.505 [2024-11-19 07:28:42.295294] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:50.505 [2024-11-19 07:28:42.295317] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.763 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:50.763 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.763 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:50.763 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:50.763 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:50.763 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.763 07:28:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:50.763 07:28:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2815841 00:04:50.763 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2815841 ']' 00:04:50.764 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2815841 00:04:50.764 07:28:42 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:50.764 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.764 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815841 00:04:50.764 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.764 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.764 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815841' 00:04:50.764 killing process with pid 2815841 00:04:50.764 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2815841 00:04:50.764 07:28:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2815841 00:04:53.296 00:04:53.296 real 0m4.414s 00:04:53.296 user 0m4.877s 00:04:53.296 sys 0m0.768s 00:04:53.296 07:28:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.296 07:28:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.296 ************************************ 00:04:53.296 END TEST exit_on_failed_rpc_init 00:04:53.296 ************************************ 00:04:53.296 07:28:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.296 00:04:53.296 real 0m23.779s 00:04:53.296 user 0m23.020s 00:04:53.296 sys 0m2.610s 00:04:53.296 07:28:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.296 07:28:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.296 ************************************ 00:04:53.296 END TEST skip_rpc 00:04:53.296 ************************************ 00:04:53.296 07:28:45 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.296 07:28:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.296 07:28:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.296 07:28:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.296 ************************************ 00:04:53.296 START TEST rpc_client 00:04:53.296 ************************************ 00:04:53.296 07:28:45 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.296 * Looking for test storage... 00:04:53.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:53.296 07:28:45 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.296 07:28:45 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.296 07:28:45 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.555 07:28:45 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:53.555 07:28:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.556 07:28:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:53.556 07:28:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:53.556 07:28:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.556 07:28:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:53.556 07:28:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.556 07:28:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.556 07:28:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.556 07:28:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:53.556 07:28:45 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.556 07:28:45 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.556 --rc genhtml_branch_coverage=1 00:04:53.556 --rc genhtml_function_coverage=1 00:04:53.556 --rc genhtml_legend=1 00:04:53.556 --rc geninfo_all_blocks=1 00:04:53.556 --rc geninfo_unexecuted_blocks=1 00:04:53.556 00:04:53.556 ' 00:04:53.556 07:28:45 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.556 --rc genhtml_branch_coverage=1 
00:04:53.556 --rc genhtml_function_coverage=1 00:04:53.556 --rc genhtml_legend=1 00:04:53.556 --rc geninfo_all_blocks=1 00:04:53.556 --rc geninfo_unexecuted_blocks=1 00:04:53.556 00:04:53.556 ' 00:04:53.556 07:28:45 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.556 --rc genhtml_branch_coverage=1 00:04:53.556 --rc genhtml_function_coverage=1 00:04:53.556 --rc genhtml_legend=1 00:04:53.556 --rc geninfo_all_blocks=1 00:04:53.556 --rc geninfo_unexecuted_blocks=1 00:04:53.556 00:04:53.556 ' 00:04:53.556 07:28:45 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.556 --rc genhtml_branch_coverage=1 00:04:53.556 --rc genhtml_function_coverage=1 00:04:53.556 --rc genhtml_legend=1 00:04:53.556 --rc geninfo_all_blocks=1 00:04:53.556 --rc geninfo_unexecuted_blocks=1 00:04:53.556 00:04:53.556 ' 00:04:53.556 07:28:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:53.556 OK 00:04:53.556 07:28:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.556 00:04:53.556 real 0m0.180s 00:04:53.556 user 0m0.107s 00:04:53.556 sys 0m0.081s 00:04:53.556 07:28:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.556 07:28:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:53.556 ************************************ 00:04:53.556 END TEST rpc_client 00:04:53.556 ************************************ 00:04:53.556 07:28:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.556 07:28:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.556 07:28:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.556 07:28:45 -- common/autotest_common.sh@10 
-- # set +x 00:04:53.556 ************************************ 00:04:53.556 START TEST json_config 00:04:53.556 ************************************ 00:04:53.556 07:28:45 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.556 07:28:45 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.556 07:28:45 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.556 07:28:45 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.556 07:28:45 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.556 07:28:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.556 07:28:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.556 07:28:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.556 07:28:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.556 07:28:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.556 07:28:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.556 07:28:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.556 07:28:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.556 07:28:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.556 07:28:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.556 07:28:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.556 07:28:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:53.556 07:28:45 json_config -- scripts/common.sh@345 -- # : 1 00:04:53.556 07:28:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.556 07:28:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.556 07:28:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:53.556 07:28:45 json_config -- scripts/common.sh@353 -- # local d=1 00:04:53.556 07:28:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.556 07:28:45 json_config -- scripts/common.sh@355 -- # echo 1 00:04:53.556 07:28:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.556 07:28:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:53.556 07:28:45 json_config -- scripts/common.sh@353 -- # local d=2 00:04:53.556 07:28:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.556 07:28:45 json_config -- scripts/common.sh@355 -- # echo 2 00:04:53.556 07:28:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.556 07:28:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.556 07:28:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.556 07:28:45 json_config -- scripts/common.sh@368 -- # return 0 00:04:53.556 07:28:45 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.556 07:28:45 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.556 --rc genhtml_branch_coverage=1 00:04:53.556 --rc genhtml_function_coverage=1 00:04:53.556 --rc genhtml_legend=1 00:04:53.556 --rc geninfo_all_blocks=1 00:04:53.556 --rc geninfo_unexecuted_blocks=1 00:04:53.556 00:04:53.556 ' 00:04:53.556 07:28:45 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.556 --rc genhtml_branch_coverage=1 00:04:53.556 --rc genhtml_function_coverage=1 00:04:53.556 --rc genhtml_legend=1 00:04:53.556 --rc geninfo_all_blocks=1 00:04:53.556 --rc geninfo_unexecuted_blocks=1 00:04:53.556 00:04:53.556 ' 00:04:53.556 07:28:45 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.556 --rc genhtml_branch_coverage=1 00:04:53.556 --rc genhtml_function_coverage=1 00:04:53.556 --rc genhtml_legend=1 00:04:53.556 --rc geninfo_all_blocks=1 00:04:53.556 --rc geninfo_unexecuted_blocks=1 00:04:53.556 00:04:53.556 ' 00:04:53.556 07:28:45 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.556 --rc genhtml_branch_coverage=1 00:04:53.556 --rc genhtml_function_coverage=1 00:04:53.556 --rc genhtml_legend=1 00:04:53.556 --rc geninfo_all_blocks=1 00:04:53.556 --rc geninfo_unexecuted_blocks=1 00:04:53.556 00:04:53.556 ' 00:04:53.556 07:28:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.556 07:28:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.556 07:28:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.556 07:28:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.556 07:28:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.556 07:28:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.556 07:28:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.556 07:28:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.557 07:28:45 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.557 07:28:45 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.557 07:28:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@51 -- # : 0 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.557 07:28:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:53.557 INFO: JSON configuration test init 00:04:53.557 07:28:45 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.557 07:28:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:53.557 07:28:45 json_config -- json_config/common.sh@9 -- # local app=target 00:04:53.557 07:28:45 json_config -- json_config/common.sh@10 -- # shift 00:04:53.557 07:28:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.557 07:28:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.557 07:28:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.557 07:28:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.557 07:28:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.557 07:28:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2816633 00:04:53.557 07:28:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:53.557 07:28:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.557 Waiting for target to run... 
00:04:53.557 07:28:45 json_config -- json_config/common.sh@25 -- # waitforlisten 2816633 /var/tmp/spdk_tgt.sock 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 2816633 ']' 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.557 07:28:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.816 [2024-11-19 07:28:45.579041] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:53.816 [2024-11-19 07:28:45.579197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816633 ] 00:04:54.075 [2024-11-19 07:28:46.001813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.333 [2024-11-19 07:28:46.124401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.591 07:28:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.591 07:28:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:54.591 07:28:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:54.591 00:04:54.591 07:28:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:54.591 07:28:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:54.591 07:28:46 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.591 07:28:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.591 07:28:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:54.591 07:28:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:54.591 07:28:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.591 07:28:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.848 07:28:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:54.848 07:28:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:54.848 07:28:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:59.034 07:28:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.034 07:28:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:59.034 07:28:50 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@54 -- # sort 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:59.034 07:28:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.034 07:28:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:59.034 07:28:50 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:59.034 07:28:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.034 07:28:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:59.034 07:28:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.034 07:28:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.292 MallocForNvmf0 00:04:59.292 07:28:51 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.292 07:28:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.550 MallocForNvmf1 00:04:59.550 07:28:51 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.550 07:28:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.808 [2024-11-19 07:28:51.541544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.808 07:28:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.808 07:28:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.066 07:28:51 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.066 07:28:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.324 07:28:52 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.324 07:28:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.582 07:28:52 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.582 07:28:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.840 [2024-11-19 07:28:52.613276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.840 07:28:52 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:00.840 07:28:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.840 07:28:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.840 07:28:52 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:00.840 07:28:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.840 07:28:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.840 07:28:52 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:05:00.840 07:28:52 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.840 07:28:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.098 MallocBdevForConfigChangeCheck 00:05:01.098 07:28:52 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:01.098 07:28:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.098 07:28:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.098 07:28:52 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:01.098 07:28:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.664 07:28:53 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:01.664 INFO: shutting down applications... 
00:05:01.664 07:28:53 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:01.664 07:28:53 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:01.664 07:28:53 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:01.664 07:28:53 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:03.563 Calling clear_iscsi_subsystem 00:05:03.563 Calling clear_nvmf_subsystem 00:05:03.563 Calling clear_nbd_subsystem 00:05:03.563 Calling clear_ublk_subsystem 00:05:03.563 Calling clear_vhost_blk_subsystem 00:05:03.563 Calling clear_vhost_scsi_subsystem 00:05:03.563 Calling clear_bdev_subsystem 00:05:03.563 07:28:55 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:03.563 07:28:55 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:03.563 07:28:55 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:03.563 07:28:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.563 07:28:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:03.563 07:28:55 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:03.563 07:28:55 json_config -- json_config/json_config.sh@352 -- # break 00:05:03.563 07:28:55 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:03.563 07:28:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:03.563 07:28:55 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:03.563 07:28:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:03.563 07:28:55 json_config -- json_config/common.sh@35 -- # [[ -n 2816633 ]] 00:05:03.563 07:28:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2816633 00:05:03.563 07:28:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.563 07:28:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.563 07:28:55 json_config -- json_config/common.sh@41 -- # kill -0 2816633 00:05:03.563 07:28:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.129 07:28:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.129 07:28:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.129 07:28:55 json_config -- json_config/common.sh@41 -- # kill -0 2816633 00:05:04.129 07:28:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.695 07:28:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.695 07:28:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.695 07:28:56 json_config -- json_config/common.sh@41 -- # kill -0 2816633 00:05:04.695 07:28:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:04.695 07:28:56 json_config -- json_config/common.sh@43 -- # break 00:05:04.695 07:28:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:04.695 07:28:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:04.695 SPDK target shutdown done 00:05:04.695 07:28:56 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:04.695 INFO: relaunching applications... 
00:05:04.695 07:28:56 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.695 07:28:56 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.695 07:28:56 json_config -- json_config/common.sh@10 -- # shift 00:05:04.695 07:28:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.695 07:28:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.695 07:28:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.695 07:28:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.695 07:28:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.695 07:28:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2817967 00:05:04.695 07:28:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.695 07:28:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.695 Waiting for target to run... 00:05:04.695 07:28:56 json_config -- json_config/common.sh@25 -- # waitforlisten 2817967 /var/tmp/spdk_tgt.sock 00:05:04.695 07:28:56 json_config -- common/autotest_common.sh@835 -- # '[' -z 2817967 ']' 00:05:04.695 07:28:56 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.695 07:28:56 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.695 07:28:56 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:04.695 07:28:56 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.695 07:28:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.695 [2024-11-19 07:28:56.544626] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:04.695 [2024-11-19 07:28:56.544806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817967 ] 00:05:05.262 [2024-11-19 07:28:57.155308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.520 [2024-11-19 07:28:57.284463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.734 [2024-11-19 07:29:01.076136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.734 [2024-11-19 07:29:01.108720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.734 07:29:01 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.734 07:29:01 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:09.734 07:29:01 json_config -- json_config/common.sh@26 -- # echo '' 00:05:09.734 00:05:09.734 07:29:01 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:09.734 07:29:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:09.734 INFO: Checking if target configuration is the same... 
00:05:09.734 07:29:01 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.734 07:29:01 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:09.734 07:29:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.734 + '[' 2 -ne 2 ']' 00:05:09.734 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.734 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:09.734 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.734 +++ basename /dev/fd/62 00:05:09.734 ++ mktemp /tmp/62.XXX 00:05:09.734 + tmp_file_1=/tmp/62.ZFv 00:05:09.734 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.734 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.734 + tmp_file_2=/tmp/spdk_tgt_config.json.3nb 00:05:09.734 + ret=0 00:05:09.734 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.734 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.734 + diff -u /tmp/62.ZFv /tmp/spdk_tgt_config.json.3nb 00:05:09.734 + echo 'INFO: JSON config files are the same' 00:05:09.734 INFO: JSON config files are the same 00:05:09.734 + rm /tmp/62.ZFv /tmp/spdk_tgt_config.json.3nb 00:05:09.734 + exit 0 00:05:09.734 07:29:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:09.734 07:29:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:09.734 INFO: changing configuration and checking if this can be detected... 
00:05:09.734 07:29:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.734 07:29:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.993 07:29:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.993 07:29:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:09.993 07:29:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.993 + '[' 2 -ne 2 ']' 00:05:09.993 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.993 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:09.993 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.993 +++ basename /dev/fd/62 00:05:09.993 ++ mktemp /tmp/62.XXX 00:05:09.993 + tmp_file_1=/tmp/62.1xS 00:05:09.993 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.993 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.993 + tmp_file_2=/tmp/spdk_tgt_config.json.QLz 00:05:09.993 + ret=0 00:05:09.993 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.559 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.559 + diff -u /tmp/62.1xS /tmp/spdk_tgt_config.json.QLz 00:05:10.559 + ret=1 00:05:10.559 + echo '=== Start of file: /tmp/62.1xS ===' 00:05:10.559 + cat /tmp/62.1xS 00:05:10.559 + echo '=== End of file: /tmp/62.1xS ===' 00:05:10.559 + echo '' 00:05:10.559 + echo '=== Start of file: /tmp/spdk_tgt_config.json.QLz ===' 00:05:10.559 + cat /tmp/spdk_tgt_config.json.QLz 00:05:10.559 + echo '=== End of file: /tmp/spdk_tgt_config.json.QLz ===' 00:05:10.559 + echo '' 00:05:10.559 + rm /tmp/62.1xS /tmp/spdk_tgt_config.json.QLz 00:05:10.559 + exit 1 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:10.559 INFO: configuration change detected. 
00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@324 -- # [[ -n 2817967 ]] 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.559 07:29:02 json_config -- json_config/json_config.sh@330 -- # killprocess 2817967 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@954 -- # '[' -z 2817967 ']' 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@958 -- # kill -0 
2817967 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@959 -- # uname 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2817967 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2817967' 00:05:10.559 killing process with pid 2817967 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@973 -- # kill 2817967 00:05:10.559 07:29:02 json_config -- common/autotest_common.sh@978 -- # wait 2817967 00:05:13.092 07:29:04 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.092 07:29:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:13.092 07:29:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.092 07:29:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.092 07:29:04 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:13.092 07:29:04 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:13.092 INFO: Success 00:05:13.092 00:05:13.092 real 0m19.506s 00:05:13.092 user 0m21.178s 00:05:13.092 sys 0m3.055s 00:05:13.092 07:29:04 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.092 07:29:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.092 ************************************ 00:05:13.092 END TEST json_config 00:05:13.092 ************************************ 00:05:13.092 07:29:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.092 07:29:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.092 07:29:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.092 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:05:13.092 ************************************ 00:05:13.092 START TEST json_config_extra_key 00:05:13.092 ************************************ 00:05:13.092 07:29:04 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:13.092 07:29:04 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.092 07:29:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.092 07:29:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.092 07:29:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.092 07:29:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:13.092 07:29:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.092 07:29:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.092 --rc genhtml_branch_coverage=1 00:05:13.092 --rc genhtml_function_coverage=1 00:05:13.092 --rc genhtml_legend=1 00:05:13.092 --rc geninfo_all_blocks=1 
00:05:13.092 --rc geninfo_unexecuted_blocks=1 00:05:13.092 00:05:13.092 ' 00:05:13.092 07:29:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.092 --rc genhtml_branch_coverage=1 00:05:13.092 --rc genhtml_function_coverage=1 00:05:13.092 --rc genhtml_legend=1 00:05:13.092 --rc geninfo_all_blocks=1 00:05:13.092 --rc geninfo_unexecuted_blocks=1 00:05:13.092 00:05:13.092 ' 00:05:13.092 07:29:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.092 --rc genhtml_branch_coverage=1 00:05:13.092 --rc genhtml_function_coverage=1 00:05:13.092 --rc genhtml_legend=1 00:05:13.092 --rc geninfo_all_blocks=1 00:05:13.092 --rc geninfo_unexecuted_blocks=1 00:05:13.092 00:05:13.092 ' 00:05:13.092 07:29:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.092 --rc genhtml_branch_coverage=1 00:05:13.092 --rc genhtml_function_coverage=1 00:05:13.092 --rc genhtml_legend=1 00:05:13.092 --rc geninfo_all_blocks=1 00:05:13.092 --rc geninfo_unexecuted_blocks=1 00:05:13.092 00:05:13.092 ' 00:05:13.092 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.092 07:29:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:13.092 07:29:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.092 07:29:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.092 07:29:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.092 07:29:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.093 07:29:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.093 07:29:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.093 07:29:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.093 07:29:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.093 07:29:05 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.093 07:29:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.093 07:29:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.093 07:29:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.093 07:29:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:13.093 07:29:05 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.093 07:29:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.351 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:13.352 INFO: launching applications... 00:05:13.352 07:29:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2819163 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.352 Waiting for target to run... 
00:05:13.352 07:29:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2819163 /var/tmp/spdk_tgt.sock 00:05:13.352 07:29:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2819163 ']' 00:05:13.352 07:29:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.352 07:29:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.352 07:29:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.352 07:29:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.352 07:29:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.352 [2024-11-19 07:29:05.124646] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:13.352 [2024-11-19 07:29:05.124813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819163 ] 00:05:13.918 [2024-11-19 07:29:05.720960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.918 [2024-11-19 07:29:05.851753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.853 07:29:06 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.853 07:29:06 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:14.853 07:29:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:14.853 00:05:14.853 07:29:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:14.853 INFO: shutting down applications... 00:05:14.853 07:29:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:14.853 07:29:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:14.853 07:29:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.853 07:29:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2819163 ]] 00:05:14.853 07:29:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2819163 00:05:14.853 07:29:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.853 07:29:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.853 07:29:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819163 00:05:14.853 07:29:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.420 07:29:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.420 07:29:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.420 07:29:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819163 00:05:15.420 07:29:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.987 07:29:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.987 07:29:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.987 07:29:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819163 00:05:15.987 07:29:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.246 07:29:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.246 07:29:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.246 07:29:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819163 00:05:16.246 07:29:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.812 
07:29:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.812 07:29:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.812 07:29:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819163 00:05:16.812 07:29:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.379 07:29:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.379 07:29:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.379 07:29:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819163 00:05:17.379 07:29:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.948 07:29:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.948 07:29:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.948 07:29:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819163 00:05:17.948 07:29:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:17.948 07:29:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:17.948 07:29:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:17.948 07:29:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:17.948 SPDK target shutdown done 00:05:17.948 07:29:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:17.948 Success 00:05:17.948 00:05:17.948 real 0m4.756s 00:05:17.948 user 0m4.244s 00:05:17.948 sys 0m0.823s 00:05:17.948 07:29:09 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.948 07:29:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.948 ************************************ 00:05:17.948 END TEST json_config_extra_key 00:05:17.948 ************************************ 00:05:17.948 07:29:09 -- spdk/autotest.sh@161 -- # run_test 
alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.948 07:29:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.948 07:29:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.948 07:29:09 -- common/autotest_common.sh@10 -- # set +x 00:05:17.948 ************************************ 00:05:17.948 START TEST alias_rpc 00:05:17.948 ************************************ 00:05:17.948 07:29:09 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.948 * Looking for test storage... 00:05:17.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:17.948 07:29:09 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.948 07:29:09 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.948 07:29:09 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.948 07:29:09 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.948 07:29:09 alias_rpc -- 
scripts/common.sh@344 -- # case "$op" in 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.948 07:29:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.949 --rc genhtml_branch_coverage=1 00:05:17.949 --rc genhtml_function_coverage=1 00:05:17.949 --rc genhtml_legend=1 00:05:17.949 --rc geninfo_all_blocks=1 00:05:17.949 --rc geninfo_unexecuted_blocks=1 00:05:17.949 00:05:17.949 ' 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.949 --rc 
genhtml_branch_coverage=1 00:05:17.949 --rc genhtml_function_coverage=1 00:05:17.949 --rc genhtml_legend=1 00:05:17.949 --rc geninfo_all_blocks=1 00:05:17.949 --rc geninfo_unexecuted_blocks=1 00:05:17.949 00:05:17.949 ' 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.949 --rc genhtml_branch_coverage=1 00:05:17.949 --rc genhtml_function_coverage=1 00:05:17.949 --rc genhtml_legend=1 00:05:17.949 --rc geninfo_all_blocks=1 00:05:17.949 --rc geninfo_unexecuted_blocks=1 00:05:17.949 00:05:17.949 ' 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.949 --rc genhtml_branch_coverage=1 00:05:17.949 --rc genhtml_function_coverage=1 00:05:17.949 --rc genhtml_legend=1 00:05:17.949 --rc geninfo_all_blocks=1 00:05:17.949 --rc geninfo_unexecuted_blocks=1 00:05:17.949 00:05:17.949 ' 00:05:17.949 07:29:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.949 07:29:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2819767 00:05:17.949 07:29:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.949 07:29:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2819767 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2819767 ']' 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.949 07:29:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.207 [2024-11-19 07:29:09.928606] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:18.207 [2024-11-19 07:29:09.928769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819767 ] 00:05:18.207 [2024-11-19 07:29:10.072960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.466 [2024-11-19 07:29:10.211579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.400 07:29:11 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.400 07:29:11 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:19.400 07:29:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:19.659 07:29:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2819767 00:05:19.659 07:29:11 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2819767 ']' 00:05:19.659 07:29:11 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2819767 00:05:19.659 07:29:11 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:19.659 07:29:11 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.659 07:29:11 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819767 00:05:19.659 07:29:11 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.659 07:29:11 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.659 07:29:11 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819767' 00:05:19.659 killing process with pid 2819767 00:05:19.659 07:29:11 
alias_rpc -- common/autotest_common.sh@973 -- # kill 2819767 00:05:19.659 07:29:11 alias_rpc -- common/autotest_common.sh@978 -- # wait 2819767 00:05:22.262 00:05:22.262 real 0m4.291s 00:05:22.262 user 0m4.437s 00:05:22.262 sys 0m0.679s 00:05:22.262 07:29:13 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.262 07:29:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.262 ************************************ 00:05:22.262 END TEST alias_rpc 00:05:22.262 ************************************ 00:05:22.262 07:29:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:22.262 07:29:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.262 07:29:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.262 07:29:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.262 07:29:14 -- common/autotest_common.sh@10 -- # set +x 00:05:22.262 ************************************ 00:05:22.262 START TEST spdkcli_tcp 00:05:22.262 ************************************ 00:05:22.262 07:29:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.262 * Looking for test storage... 
00:05:22.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:22.262 07:29:14 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.262 07:29:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.262 07:29:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.262 07:29:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.262 07:29:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:22.262 07:29:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.262 07:29:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.262 --rc genhtml_branch_coverage=1 00:05:22.262 --rc genhtml_function_coverage=1 00:05:22.262 --rc genhtml_legend=1 00:05:22.262 --rc geninfo_all_blocks=1 00:05:22.262 --rc geninfo_unexecuted_blocks=1 00:05:22.262 00:05:22.262 ' 00:05:22.262 07:29:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.262 --rc genhtml_branch_coverage=1 00:05:22.262 --rc genhtml_function_coverage=1 00:05:22.262 --rc genhtml_legend=1 00:05:22.262 --rc geninfo_all_blocks=1 00:05:22.262 --rc geninfo_unexecuted_blocks=1 00:05:22.262 00:05:22.262 ' 00:05:22.262 07:29:14 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.262 --rc genhtml_branch_coverage=1 00:05:22.262 --rc genhtml_function_coverage=1 00:05:22.262 --rc genhtml_legend=1 00:05:22.262 --rc geninfo_all_blocks=1 00:05:22.262 --rc geninfo_unexecuted_blocks=1 00:05:22.262 00:05:22.262 ' 00:05:22.262 07:29:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.262 --rc genhtml_branch_coverage=1 00:05:22.262 --rc genhtml_function_coverage=1 00:05:22.262 --rc genhtml_legend=1 00:05:22.262 --rc geninfo_all_blocks=1 00:05:22.262 --rc geninfo_unexecuted_blocks=1 00:05:22.262 00:05:22.262 ' 00:05:22.262 07:29:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:22.263 07:29:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:22.263 07:29:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:22.263 07:29:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:22.263 07:29:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:22.263 07:29:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:22.263 07:29:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:22.263 07:29:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.263 07:29:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.263 07:29:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2820363 00:05:22.263 07:29:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:22.263 07:29:14 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2820363 00:05:22.263 07:29:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2820363 ']' 00:05:22.263 07:29:14 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.263 07:29:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.263 07:29:14 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.263 07:29:14 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.263 07:29:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.521 [2024-11-19 07:29:14.288035] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:22.521 [2024-11-19 07:29:14.288193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820363 ] 00:05:22.521 [2024-11-19 07:29:14.423105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.779 [2024-11-19 07:29:14.559069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.779 [2024-11-19 07:29:14.559073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.711 07:29:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.711 07:29:15 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:23.711 07:29:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2820502 00:05:23.711 07:29:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:23.711 07:29:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.969 [ 00:05:23.969 "bdev_malloc_delete", 00:05:23.969 "bdev_malloc_create", 00:05:23.969 "bdev_null_resize", 00:05:23.969 "bdev_null_delete", 00:05:23.969 "bdev_null_create", 00:05:23.969 "bdev_nvme_cuse_unregister", 00:05:23.969 "bdev_nvme_cuse_register", 00:05:23.969 "bdev_opal_new_user", 00:05:23.969 "bdev_opal_set_lock_state", 00:05:23.969 "bdev_opal_delete", 00:05:23.969 "bdev_opal_get_info", 00:05:23.969 "bdev_opal_create", 00:05:23.969 "bdev_nvme_opal_revert", 00:05:23.969 "bdev_nvme_opal_init", 00:05:23.969 "bdev_nvme_send_cmd", 00:05:23.969 "bdev_nvme_set_keys", 00:05:23.969 "bdev_nvme_get_path_iostat", 00:05:23.969 "bdev_nvme_get_mdns_discovery_info", 00:05:23.969 "bdev_nvme_stop_mdns_discovery", 00:05:23.969 "bdev_nvme_start_mdns_discovery", 00:05:23.969 "bdev_nvme_set_multipath_policy", 00:05:23.969 "bdev_nvme_set_preferred_path", 00:05:23.969 "bdev_nvme_get_io_paths", 00:05:23.969 "bdev_nvme_remove_error_injection", 00:05:23.969 "bdev_nvme_add_error_injection", 00:05:23.969 "bdev_nvme_get_discovery_info", 00:05:23.969 "bdev_nvme_stop_discovery", 00:05:23.969 "bdev_nvme_start_discovery", 00:05:23.969 "bdev_nvme_get_controller_health_info", 00:05:23.969 "bdev_nvme_disable_controller", 00:05:23.969 "bdev_nvme_enable_controller", 00:05:23.969 "bdev_nvme_reset_controller", 00:05:23.969 "bdev_nvme_get_transport_statistics", 00:05:23.969 "bdev_nvme_apply_firmware", 00:05:23.969 "bdev_nvme_detach_controller", 00:05:23.969 "bdev_nvme_get_controllers", 00:05:23.969 "bdev_nvme_attach_controller", 00:05:23.969 "bdev_nvme_set_hotplug", 00:05:23.969 "bdev_nvme_set_options", 00:05:23.969 "bdev_passthru_delete", 00:05:23.969 "bdev_passthru_create", 00:05:23.969 "bdev_lvol_set_parent_bdev", 00:05:23.969 "bdev_lvol_set_parent", 00:05:23.969 "bdev_lvol_check_shallow_copy", 00:05:23.969 "bdev_lvol_start_shallow_copy", 00:05:23.969 "bdev_lvol_grow_lvstore", 00:05:23.969 "bdev_lvol_get_lvols", 00:05:23.969 
"bdev_lvol_get_lvstores", 00:05:23.969 "bdev_lvol_delete", 00:05:23.969 "bdev_lvol_set_read_only", 00:05:23.969 "bdev_lvol_resize", 00:05:23.969 "bdev_lvol_decouple_parent", 00:05:23.969 "bdev_lvol_inflate", 00:05:23.969 "bdev_lvol_rename", 00:05:23.969 "bdev_lvol_clone_bdev", 00:05:23.969 "bdev_lvol_clone", 00:05:23.969 "bdev_lvol_snapshot", 00:05:23.969 "bdev_lvol_create", 00:05:23.969 "bdev_lvol_delete_lvstore", 00:05:23.969 "bdev_lvol_rename_lvstore", 00:05:23.969 "bdev_lvol_create_lvstore", 00:05:23.969 "bdev_raid_set_options", 00:05:23.969 "bdev_raid_remove_base_bdev", 00:05:23.969 "bdev_raid_add_base_bdev", 00:05:23.969 "bdev_raid_delete", 00:05:23.969 "bdev_raid_create", 00:05:23.969 "bdev_raid_get_bdevs", 00:05:23.969 "bdev_error_inject_error", 00:05:23.969 "bdev_error_delete", 00:05:23.969 "bdev_error_create", 00:05:23.969 "bdev_split_delete", 00:05:23.969 "bdev_split_create", 00:05:23.969 "bdev_delay_delete", 00:05:23.969 "bdev_delay_create", 00:05:23.969 "bdev_delay_update_latency", 00:05:23.969 "bdev_zone_block_delete", 00:05:23.969 "bdev_zone_block_create", 00:05:23.969 "blobfs_create", 00:05:23.969 "blobfs_detect", 00:05:23.969 "blobfs_set_cache_size", 00:05:23.969 "bdev_aio_delete", 00:05:23.969 "bdev_aio_rescan", 00:05:23.969 "bdev_aio_create", 00:05:23.969 "bdev_ftl_set_property", 00:05:23.969 "bdev_ftl_get_properties", 00:05:23.969 "bdev_ftl_get_stats", 00:05:23.969 "bdev_ftl_unmap", 00:05:23.969 "bdev_ftl_unload", 00:05:23.969 "bdev_ftl_delete", 00:05:23.969 "bdev_ftl_load", 00:05:23.969 "bdev_ftl_create", 00:05:23.969 "bdev_virtio_attach_controller", 00:05:23.969 "bdev_virtio_scsi_get_devices", 00:05:23.969 "bdev_virtio_detach_controller", 00:05:23.969 "bdev_virtio_blk_set_hotplug", 00:05:23.969 "bdev_iscsi_delete", 00:05:23.969 "bdev_iscsi_create", 00:05:23.969 "bdev_iscsi_set_options", 00:05:23.969 "accel_error_inject_error", 00:05:23.969 "ioat_scan_accel_module", 00:05:23.969 "dsa_scan_accel_module", 00:05:23.969 "iaa_scan_accel_module", 
00:05:23.969 "keyring_file_remove_key", 00:05:23.969 "keyring_file_add_key", 00:05:23.969 "keyring_linux_set_options", 00:05:23.969 "fsdev_aio_delete", 00:05:23.969 "fsdev_aio_create", 00:05:23.969 "iscsi_get_histogram", 00:05:23.969 "iscsi_enable_histogram", 00:05:23.969 "iscsi_set_options", 00:05:23.969 "iscsi_get_auth_groups", 00:05:23.969 "iscsi_auth_group_remove_secret", 00:05:23.969 "iscsi_auth_group_add_secret", 00:05:23.969 "iscsi_delete_auth_group", 00:05:23.969 "iscsi_create_auth_group", 00:05:23.969 "iscsi_set_discovery_auth", 00:05:23.969 "iscsi_get_options", 00:05:23.969 "iscsi_target_node_request_logout", 00:05:23.969 "iscsi_target_node_set_redirect", 00:05:23.969 "iscsi_target_node_set_auth", 00:05:23.969 "iscsi_target_node_add_lun", 00:05:23.969 "iscsi_get_stats", 00:05:23.969 "iscsi_get_connections", 00:05:23.969 "iscsi_portal_group_set_auth", 00:05:23.969 "iscsi_start_portal_group", 00:05:23.969 "iscsi_delete_portal_group", 00:05:23.969 "iscsi_create_portal_group", 00:05:23.969 "iscsi_get_portal_groups", 00:05:23.969 "iscsi_delete_target_node", 00:05:23.969 "iscsi_target_node_remove_pg_ig_maps", 00:05:23.969 "iscsi_target_node_add_pg_ig_maps", 00:05:23.969 "iscsi_create_target_node", 00:05:23.969 "iscsi_get_target_nodes", 00:05:23.969 "iscsi_delete_initiator_group", 00:05:23.969 "iscsi_initiator_group_remove_initiators", 00:05:23.969 "iscsi_initiator_group_add_initiators", 00:05:23.969 "iscsi_create_initiator_group", 00:05:23.969 "iscsi_get_initiator_groups", 00:05:23.969 "nvmf_set_crdt", 00:05:23.969 "nvmf_set_config", 00:05:23.969 "nvmf_set_max_subsystems", 00:05:23.969 "nvmf_stop_mdns_prr", 00:05:23.969 "nvmf_publish_mdns_prr", 00:05:23.969 "nvmf_subsystem_get_listeners", 00:05:23.969 "nvmf_subsystem_get_qpairs", 00:05:23.970 "nvmf_subsystem_get_controllers", 00:05:23.970 "nvmf_get_stats", 00:05:23.970 "nvmf_get_transports", 00:05:23.970 "nvmf_create_transport", 00:05:23.970 "nvmf_get_targets", 00:05:23.970 "nvmf_delete_target", 00:05:23.970 
"nvmf_create_target", 00:05:23.970 "nvmf_subsystem_allow_any_host", 00:05:23.970 "nvmf_subsystem_set_keys", 00:05:23.970 "nvmf_subsystem_remove_host", 00:05:23.970 "nvmf_subsystem_add_host", 00:05:23.970 "nvmf_ns_remove_host", 00:05:23.970 "nvmf_ns_add_host", 00:05:23.970 "nvmf_subsystem_remove_ns", 00:05:23.970 "nvmf_subsystem_set_ns_ana_group", 00:05:23.970 "nvmf_subsystem_add_ns", 00:05:23.970 "nvmf_subsystem_listener_set_ana_state", 00:05:23.970 "nvmf_discovery_get_referrals", 00:05:23.970 "nvmf_discovery_remove_referral", 00:05:23.970 "nvmf_discovery_add_referral", 00:05:23.970 "nvmf_subsystem_remove_listener", 00:05:23.970 "nvmf_subsystem_add_listener", 00:05:23.970 "nvmf_delete_subsystem", 00:05:23.970 "nvmf_create_subsystem", 00:05:23.970 "nvmf_get_subsystems", 00:05:23.970 "env_dpdk_get_mem_stats", 00:05:23.970 "nbd_get_disks", 00:05:23.970 "nbd_stop_disk", 00:05:23.970 "nbd_start_disk", 00:05:23.970 "ublk_recover_disk", 00:05:23.970 "ublk_get_disks", 00:05:23.970 "ublk_stop_disk", 00:05:23.970 "ublk_start_disk", 00:05:23.970 "ublk_destroy_target", 00:05:23.970 "ublk_create_target", 00:05:23.970 "virtio_blk_create_transport", 00:05:23.970 "virtio_blk_get_transports", 00:05:23.970 "vhost_controller_set_coalescing", 00:05:23.970 "vhost_get_controllers", 00:05:23.970 "vhost_delete_controller", 00:05:23.970 "vhost_create_blk_controller", 00:05:23.970 "vhost_scsi_controller_remove_target", 00:05:23.970 "vhost_scsi_controller_add_target", 00:05:23.970 "vhost_start_scsi_controller", 00:05:23.970 "vhost_create_scsi_controller", 00:05:23.970 "thread_set_cpumask", 00:05:23.970 "scheduler_set_options", 00:05:23.970 "framework_get_governor", 00:05:23.970 "framework_get_scheduler", 00:05:23.970 "framework_set_scheduler", 00:05:23.970 "framework_get_reactors", 00:05:23.970 "thread_get_io_channels", 00:05:23.970 "thread_get_pollers", 00:05:23.970 "thread_get_stats", 00:05:23.970 "framework_monitor_context_switch", 00:05:23.970 "spdk_kill_instance", 00:05:23.970 
"log_enable_timestamps", 00:05:23.970 "log_get_flags", 00:05:23.970 "log_clear_flag", 00:05:23.970 "log_set_flag", 00:05:23.970 "log_get_level", 00:05:23.970 "log_set_level", 00:05:23.970 "log_get_print_level", 00:05:23.970 "log_set_print_level", 00:05:23.970 "framework_enable_cpumask_locks", 00:05:23.970 "framework_disable_cpumask_locks", 00:05:23.970 "framework_wait_init", 00:05:23.970 "framework_start_init", 00:05:23.970 "scsi_get_devices", 00:05:23.970 "bdev_get_histogram", 00:05:23.970 "bdev_enable_histogram", 00:05:23.970 "bdev_set_qos_limit", 00:05:23.970 "bdev_set_qd_sampling_period", 00:05:23.970 "bdev_get_bdevs", 00:05:23.970 "bdev_reset_iostat", 00:05:23.970 "bdev_get_iostat", 00:05:23.970 "bdev_examine", 00:05:23.970 "bdev_wait_for_examine", 00:05:23.970 "bdev_set_options", 00:05:23.970 "accel_get_stats", 00:05:23.970 "accel_set_options", 00:05:23.970 "accel_set_driver", 00:05:23.970 "accel_crypto_key_destroy", 00:05:23.970 "accel_crypto_keys_get", 00:05:23.970 "accel_crypto_key_create", 00:05:23.970 "accel_assign_opc", 00:05:23.970 "accel_get_module_info", 00:05:23.970 "accel_get_opc_assignments", 00:05:23.970 "vmd_rescan", 00:05:23.970 "vmd_remove_device", 00:05:23.970 "vmd_enable", 00:05:23.970 "sock_get_default_impl", 00:05:23.970 "sock_set_default_impl", 00:05:23.970 "sock_impl_set_options", 00:05:23.970 "sock_impl_get_options", 00:05:23.970 "iobuf_get_stats", 00:05:23.970 "iobuf_set_options", 00:05:23.970 "keyring_get_keys", 00:05:23.970 "framework_get_pci_devices", 00:05:23.970 "framework_get_config", 00:05:23.970 "framework_get_subsystems", 00:05:23.970 "fsdev_set_opts", 00:05:23.970 "fsdev_get_opts", 00:05:23.970 "trace_get_info", 00:05:23.970 "trace_get_tpoint_group_mask", 00:05:23.970 "trace_disable_tpoint_group", 00:05:23.970 "trace_enable_tpoint_group", 00:05:23.970 "trace_clear_tpoint_mask", 00:05:23.970 "trace_set_tpoint_mask", 00:05:23.970 "notify_get_notifications", 00:05:23.970 "notify_get_types", 00:05:23.970 "spdk_get_version", 
00:05:23.970 "rpc_get_methods" 00:05:23.970 ] 00:05:23.970 07:29:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.970 07:29:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:23.970 07:29:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2820363 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2820363 ']' 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2820363 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2820363 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2820363' 00:05:23.970 killing process with pid 2820363 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2820363 00:05:23.970 07:29:15 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2820363 00:05:26.499 00:05:26.499 real 0m4.152s 00:05:26.499 user 0m7.645s 00:05:26.499 sys 0m0.643s 00:05:26.499 07:29:18 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.499 07:29:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.499 ************************************ 00:05:26.499 END TEST spdkcli_tcp 00:05:26.499 ************************************ 00:05:26.499 07:29:18 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.499 07:29:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.499 07:29:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.499 07:29:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.499 ************************************ 00:05:26.499 START TEST dpdk_mem_utility 00:05:26.499 ************************************ 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.499 * Looking for test storage... 00:05:26.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.499 
07:29:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.499 07:29:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.499 --rc genhtml_branch_coverage=1 00:05:26.499 --rc genhtml_function_coverage=1 00:05:26.499 --rc genhtml_legend=1 00:05:26.499 --rc geninfo_all_blocks=1 00:05:26.499 --rc 
geninfo_unexecuted_blocks=1 00:05:26.499 00:05:26.499 ' 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.499 --rc genhtml_branch_coverage=1 00:05:26.499 --rc genhtml_function_coverage=1 00:05:26.499 --rc genhtml_legend=1 00:05:26.499 --rc geninfo_all_blocks=1 00:05:26.499 --rc geninfo_unexecuted_blocks=1 00:05:26.499 00:05:26.499 ' 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.499 --rc genhtml_branch_coverage=1 00:05:26.499 --rc genhtml_function_coverage=1 00:05:26.499 --rc genhtml_legend=1 00:05:26.499 --rc geninfo_all_blocks=1 00:05:26.499 --rc geninfo_unexecuted_blocks=1 00:05:26.499 00:05:26.499 ' 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.499 --rc genhtml_branch_coverage=1 00:05:26.499 --rc genhtml_function_coverage=1 00:05:26.499 --rc genhtml_legend=1 00:05:26.499 --rc geninfo_all_blocks=1 00:05:26.499 --rc geninfo_unexecuted_blocks=1 00:05:26.499 00:05:26.499 ' 00:05:26.499 07:29:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.499 07:29:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2820969 00:05:26.499 07:29:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.499 07:29:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2820969 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2820969 ']' 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.499 07:29:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.758 [2024-11-19 07:29:18.462412] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:26.758 [2024-11-19 07:29:18.462560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820969 ] 00:05:26.758 [2024-11-19 07:29:18.602711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.016 [2024-11-19 07:29:18.740871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.952 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.952 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:27.952 07:29:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.952 07:29:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.952 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.952 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.952 { 00:05:27.952 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.952 } 00:05:27.952 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.952 
07:29:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.952 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:27.952 1 heaps totaling size 816.000000 MiB 00:05:27.952 size: 816.000000 MiB heap id: 0 00:05:27.952 end heaps---------- 00:05:27.952 9 mempools totaling size 595.772034 MiB 00:05:27.952 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.952 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.952 size: 92.545471 MiB name: bdev_io_2820969 00:05:27.952 size: 50.003479 MiB name: msgpool_2820969 00:05:27.952 size: 36.509338 MiB name: fsdev_io_2820969 00:05:27.952 size: 21.763794 MiB name: PDU_Pool 00:05:27.952 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.952 size: 4.133484 MiB name: evtpool_2820969 00:05:27.952 size: 0.026123 MiB name: Session_Pool 00:05:27.952 end mempools------- 00:05:27.952 6 memzones totaling size 4.142822 MiB 00:05:27.952 size: 1.000366 MiB name: RG_ring_0_2820969 00:05:27.952 size: 1.000366 MiB name: RG_ring_1_2820969 00:05:27.952 size: 1.000366 MiB name: RG_ring_4_2820969 00:05:27.952 size: 1.000366 MiB name: RG_ring_5_2820969 00:05:27.952 size: 0.125366 MiB name: RG_ring_2_2820969 00:05:27.952 size: 0.015991 MiB name: RG_ring_3_2820969 00:05:27.952 end memzones------- 00:05:27.952 07:29:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.952 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:27.952 list of free elements. 
size: 16.857605 MiB 00:05:27.952 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:27.952 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:27.952 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:27.952 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:27.952 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:27.952 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:27.952 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:27.952 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:27.952 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:27.952 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:27.952 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:27.952 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:27.952 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:27.952 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:27.952 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:27.952 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:27.952 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:27.952 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:27.952 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:27.952 list of standard malloc elements. 
size: 199.221497 MiB 00:05:27.952 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:27.952 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:27.952 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:27.952 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:27.952 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:27.952 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:27.952 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:27.952 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:27.952 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:27.952 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:27.952 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:27.952 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:27.952 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:27.952 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:27.952 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:27.952 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:27.952 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:27.952 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:27.952 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:27.952 element at 
address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000a5ff980 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:27.952 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:27.952 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:27.952 list of memzone associated elements. 
size: 599.920898 MiB
00:05:27.952 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:27.952 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:27.952 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:27.952 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:27.952 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:27.952 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2820969_0
00:05:27.952 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:27.952 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2820969_0
00:05:27.952 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:27.952 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2820969_0
00:05:27.952 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:27.952 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:27.952 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:27.952 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:27.952 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:27.952 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2820969_0
00:05:27.952 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:27.952 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2820969
00:05:27.952 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:27.952 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2820969
00:05:27.952 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:27.952 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:27.952 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:27.952 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:27.952 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:27.952 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:27.952 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:27.952 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:27.952 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:27.952 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2820969
00:05:27.952 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:27.952 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2820969
00:05:27.952 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:27.953 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2820969
00:05:27.953 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:27.953 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2820969
00:05:27.953 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:27.953 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2820969
00:05:27.953 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:27.953 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2820969
00:05:27.953 element at address: 0x200018e7dbc0 with size: 0.500549 MiB
00:05:27.953 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:27.953 element at address: 0x200012c72380 with size: 0.500549 MiB
00:05:27.953 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:27.953 element at address: 0x20001967c540 with size: 0.250549 MiB
00:05:27.953 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:27.953 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:27.953 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2820969
00:05:27.953 element at address: 0x20000085f180 with size: 0.125549 MiB
00:05:27.953 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2820969
00:05:27.953 element at address: 0x200018af5bc0 with size: 0.031799 MiB
00:05:27.953 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:27.953 element at address: 0x2000280693c0 with size: 0.023804 MiB
00:05:27.953 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:27.953 element at address: 0x20000085af40 with size: 0.016174 MiB
00:05:27.953 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2820969
00:05:27.953 element at address: 0x20002806f540 with size: 0.002502 MiB
00:05:27.953 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:27.953 element at address: 0x2000004ffb40 with size: 0.000366 MiB
00:05:27.953 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2820969
00:05:27.953 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:27.953 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2820969
00:05:27.953 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:27.953 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2820969
00:05:27.953 element at address: 0x20000a5ffa80 with size: 0.000366 MiB
00:05:27.953 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:27.953 07:29:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:27.953 07:29:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2820969
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2820969 ']'
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2820969
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2820969
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2820969'
00:05:27.953 killing process with pid 2820969
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2820969
00:05:27.953 07:29:19 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2820969
00:05:30.484 
00:05:30.484 real 0m4.069s
00:05:30.484 user 0m4.112s
00:05:30.484 sys 0m0.649s
00:05:30.484 07:29:22 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:30.484 07:29:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:30.485 ************************************
00:05:30.485 END TEST dpdk_mem_utility
00:05:30.485 ************************************
00:05:30.485 07:29:22 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:30.485 07:29:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:30.485 07:29:22 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:30.485 07:29:22 -- common/autotest_common.sh@10 -- # set +x
00:05:30.485 ************************************
00:05:30.485 START TEST event
00:05:30.485 ************************************
00:05:30.485 07:29:22 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:30.485 * Looking for test storage...
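The trace above shows the harness's killprocess helper tearing down pid 2820969: check the pid is alive with `kill -0`, look up its command name with `ps`, then `kill` and `wait` to reap it. A minimal sketch of that pattern follows; this is not the SPDK helper itself (the real one in common/autotest_common.sh also special-cases sudo-launched processes), and the demo pid is a throwaway `sleep`.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern traced in the log above (assumption:
# simplified relative to SPDK's common/autotest_common.sh version).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # require a pid argument
    kill -0 "$pid" 2>/dev/null || return 1     # liveness check, as in the trace
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # command name only
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"                                # SIGTERM
    wait "$pid" 2>/dev/null                    # reap it if it is our child
    return 0
}

# demo: start a throwaway process and tear it down
sleep 5 &
killprocess $!
```

The `kill -0` probe sends no signal at all; it only tests whether the pid exists and is signalable, which is why the helper can bail out early on an already-dead process.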
00:05:30.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:30.485 07:29:22 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:30.485 07:29:22 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:30.485 07:29:22 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:30.743 07:29:22 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:30.743 07:29:22 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:30.743 07:29:22 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:30.743 07:29:22 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:30.743 07:29:22 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:30.743 07:29:22 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:30.743 07:29:22 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:30.743 07:29:22 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:30.743 07:29:22 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:30.743 07:29:22 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:30.743 07:29:22 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:30.743 07:29:22 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:30.743 07:29:22 event -- scripts/common.sh@344 -- # case "$op" in
00:05:30.743 07:29:22 event -- scripts/common.sh@345 -- # : 1
00:05:30.743 07:29:22 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:30.743 07:29:22 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:30.743 07:29:22 event -- scripts/common.sh@365 -- # decimal 1
00:05:30.743 07:29:22 event -- scripts/common.sh@353 -- # local d=1
00:05:30.743 07:29:22 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:30.743 07:29:22 event -- scripts/common.sh@355 -- # echo 1
00:05:30.743 07:29:22 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:30.743 07:29:22 event -- scripts/common.sh@366 -- # decimal 2
00:05:30.743 07:29:22 event -- scripts/common.sh@353 -- # local d=2
00:05:30.743 07:29:22 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:30.743 07:29:22 event -- scripts/common.sh@355 -- # echo 2
00:05:30.743 07:29:22 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:30.743 07:29:22 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:30.743 07:29:22 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:30.743 07:29:22 event -- scripts/common.sh@368 -- # return 0
00:05:30.743 07:29:22 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:30.743 07:29:22 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:30.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.743 --rc genhtml_branch_coverage=1
00:05:30.743 --rc genhtml_function_coverage=1
00:05:30.743 --rc genhtml_legend=1
00:05:30.744 --rc geninfo_all_blocks=1
00:05:30.744 --rc geninfo_unexecuted_blocks=1
00:05:30.744 
00:05:30.744 '
00:05:30.744 07:29:22 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:30.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.744 --rc genhtml_branch_coverage=1
00:05:30.744 --rc genhtml_function_coverage=1
00:05:30.744 --rc genhtml_legend=1
00:05:30.744 --rc geninfo_all_blocks=1
00:05:30.744 --rc geninfo_unexecuted_blocks=1
00:05:30.744 
00:05:30.744 '
00:05:30.744 07:29:22 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:30.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.744 --rc genhtml_branch_coverage=1
00:05:30.744 --rc genhtml_function_coverage=1
00:05:30.744 --rc genhtml_legend=1
00:05:30.744 --rc geninfo_all_blocks=1
00:05:30.744 --rc geninfo_unexecuted_blocks=1
00:05:30.744 
00:05:30.744 '
00:05:30.744 07:29:22 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:30.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.744 --rc genhtml_branch_coverage=1
00:05:30.744 --rc genhtml_function_coverage=1
00:05:30.744 --rc genhtml_legend=1
00:05:30.744 --rc geninfo_all_blocks=1
00:05:30.744 --rc geninfo_unexecuted_blocks=1
00:05:30.744 
00:05:30.744 '
00:05:30.744 07:29:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:30.744 07:29:22 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:30.744 07:29:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:30.744 07:29:22 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:30.744 07:29:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:30.744 07:29:22 event -- common/autotest_common.sh@10 -- # set +x
00:05:30.744 ************************************
00:05:30.744 START TEST event_perf
00:05:30.744 ************************************
00:05:30.744 07:29:22 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:30.744 Running I/O for 1 seconds...[2024-11-19 07:29:22.540663] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:30.744 [2024-11-19 07:29:22.540839] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821440 ]
00:05:31.002 [2024-11-19 07:29:22.679410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:31.002 [2024-11-19 07:29:22.826634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:31.002 [2024-11-19 07:29:22.826709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:31.002 [2024-11-19 07:29:22.826795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.002 [2024-11-19 07:29:22.826803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:32.376 Running I/O for 1 seconds...
00:05:32.376 lcore 0: 219211
00:05:32.376 lcore 1: 219210
00:05:32.376 lcore 2: 219210
00:05:32.376 lcore 3: 219211
00:05:32.376 done.
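The lt / cmp_versions trace above (scripts/common.sh) splits the two dotted version strings on `.`, `-`, and `:` with `IFS` and `read -ra`, then walks the fields numerically; "1.15" sorts before "2" because the first fields already decide the comparison. A standalone sketch of that field-by-field logic follows; the helper name `version_lt` is mine, and this is a simplification of the traced script, which routes through cmp_versions with an operator argument and a `decimal` sanitizer.

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced in the log above (assumption:
# simplified relative to SPDK's scripts/common.sh cmp_versions).
version_lt() {
    local -a v1 v2
    # Split on dots, dashes, and colons, as the traced IFS=.-: read does.
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local n=${#v1[@]} i
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    # Compare field by field; missing fields count as 0 (so 1.2 == 1.2.0).
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly less
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

The numeric field comparison is what keeps "1.15" from sorting after "2", which a plain string comparison would get wrong.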
00:05:32.376 
00:05:32.376 real 0m1.587s
00:05:32.376 user 0m4.419s
00:05:32.376 sys 0m0.155s
00:05:32.376 07:29:24 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:32.376 07:29:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:32.376 ************************************
00:05:32.376 END TEST event_perf
00:05:32.376 ************************************
00:05:32.376 07:29:24 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:32.376 07:29:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:32.376 07:29:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:32.376 07:29:24 event -- common/autotest_common.sh@10 -- # set +x
00:05:32.376 ************************************
00:05:32.376 START TEST event_reactor
00:05:32.376 ************************************
00:05:32.376 07:29:24 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:32.376 [2024-11-19 07:29:24.176092] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:32.376 [2024-11-19 07:29:24.176229] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821727 ]
00:05:32.635 [2024-11-19 07:29:24.316000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.635 [2024-11-19 07:29:24.454078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.016 test_start
00:05:34.016 oneshot
00:05:34.016 tick 100
00:05:34.016 tick 100
00:05:34.016 tick 250
00:05:34.016 tick 100
00:05:34.016 tick 100
00:05:34.016 tick 100
00:05:34.016 tick 250
00:05:34.016 tick 500
00:05:34.016 tick 100
00:05:34.016 tick 100
00:05:34.016 tick 250
00:05:34.016 tick 100
00:05:34.016 tick 100
00:05:34.016 test_end
00:05:34.016 
00:05:34.016 real 0m1.568s
00:05:34.016 user 0m1.414s
00:05:34.016 sys 0m0.145s
00:05:34.016 07:29:25 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:34.016 07:29:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:34.016 ************************************
00:05:34.016 END TEST event_reactor
00:05:34.016 ************************************
00:05:34.016 07:29:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:34.016 07:29:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:34.016 07:29:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:34.016 07:29:25 event -- common/autotest_common.sh@10 -- # set +x
00:05:34.016 ************************************
00:05:34.016 START TEST event_reactor_perf
00:05:34.016 ************************************
00:05:34.016 07:29:25 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:34.016 [2024-11-19 07:29:25.789196] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:34.016 [2024-11-19 07:29:25.789309] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821882 ]
00:05:34.016 [2024-11-19 07:29:25.929923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.275 [2024-11-19 07:29:26.071147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.649 test_start
00:05:35.649 test_end
00:05:35.649 Performance: 268515 events per second
00:05:35.649 
00:05:35.649 real 0m1.572s
00:05:35.649 user 0m1.417s
00:05:35.649 sys 0m0.146s
00:05:35.649 07:29:27 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:35.649 07:29:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:35.649 ************************************
00:05:35.649 END TEST event_reactor_perf
00:05:35.649 ************************************
00:05:35.649 07:29:27 event -- event/event.sh@49 -- # uname -s
00:05:35.649 07:29:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:35.649 07:29:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:35.649 07:29:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:35.649 07:29:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:35.649 07:29:27 event -- common/autotest_common.sh@10 -- # set +x
00:05:35.649 ************************************
00:05:35.649 START TEST event_scheduler
00:05:35.649 ************************************
00:05:35.649 07:29:27 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:35.649 * Looking for test storage...
00:05:35.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:35.649 07:29:27 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:35.649 07:29:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:05:35.649 07:29:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:35.649 07:29:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:35.649 07:29:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:35.649 07:29:27 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:35.649 07:29:27 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:35.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.649 --rc genhtml_branch_coverage=1
00:05:35.649 --rc genhtml_function_coverage=1
00:05:35.649 --rc genhtml_legend=1
00:05:35.649 --rc geninfo_all_blocks=1
00:05:35.649 --rc geninfo_unexecuted_blocks=1
00:05:35.649 
00:05:35.649 '
00:05:35.649 07:29:27 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:35.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.649 --rc genhtml_branch_coverage=1
00:05:35.649 --rc genhtml_function_coverage=1
00:05:35.649 --rc genhtml_legend=1
00:05:35.649 --rc geninfo_all_blocks=1
00:05:35.649 --rc geninfo_unexecuted_blocks=1
00:05:35.649 
00:05:35.649 '
00:05:35.649 07:29:27 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:35.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.649 --rc genhtml_branch_coverage=1
00:05:35.650 --rc genhtml_function_coverage=1
00:05:35.650 --rc genhtml_legend=1
00:05:35.650 --rc geninfo_all_blocks=1
00:05:35.650 --rc geninfo_unexecuted_blocks=1
00:05:35.650 
00:05:35.650 '
00:05:35.650 07:29:27 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:35.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.650 --rc genhtml_branch_coverage=1
00:05:35.650 --rc genhtml_function_coverage=1
00:05:35.650 --rc genhtml_legend=1
00:05:35.650 --rc geninfo_all_blocks=1
00:05:35.650 --rc geninfo_unexecuted_blocks=1
00:05:35.650 
00:05:35.650 '
00:05:35.650 07:29:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:35.650 07:29:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2822202
00:05:35.650 07:29:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:35.650 07:29:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:35.650 07:29:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2822202
00:05:35.650 07:29:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2822202 ']'
00:05:35.650 07:29:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.650 07:29:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:35.650 07:29:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:35.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:35.650 07:29:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:35.650 07:29:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:35.908 [2024-11-19 07:29:27.599239] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:05:35.908 [2024-11-19 07:29:27.599383] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822202 ]
00:05:35.908 [2024-11-19 07:29:27.729890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:36.166 [2024-11-19 07:29:27.850910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.166 [2024-11-19 07:29:27.850971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:36.166 [2024-11-19 07:29:27.851011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:36.166 [2024-11-19 07:29:27.851020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:36.733 07:29:28 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:36.733 07:29:28 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:36.733 07:29:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:36.733 07:29:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.733 07:29:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:36.733 [2024-11-19 07:29:28.566203] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:36.733 [2024-11-19 07:29:28.566281] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:36.733 [2024-11-19 07:29:28.566318] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:36.733 [2024-11-19 07:29:28.566338] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:36.733 [2024-11-19 07:29:28.566359] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:36.733 07:29:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:36.733 07:29:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:36.733 07:29:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.733 07:29:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:36.992 [2024-11-19 07:29:28.873431] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:36.992 07:29:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:36.992 07:29:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:36.992 07:29:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.992 07:29:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.992 07:29:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:36.992 ************************************
00:05:36.992 START TEST scheduler_create_thread
00:05:36.992 ************************************
00:05:36.992 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:36.992 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:36.992 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.992 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:36.992 2
00:05:36.992 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:36.992 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:36.992 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.992 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:36.992 3
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.251 4
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.251 5
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.251 6
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.251 7
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.251 8
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.251 9
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:37.251 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.252 10
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.252 07:29:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.252 07:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.252 07:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:37.252 07:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:37.252 07:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.252 07:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.252 07:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.252 
00:05:37.252 real 0m0.112s
00:05:37.252 user 0m0.009s
00:05:37.252 sys 0m0.005s
00:05:37.252 07:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:37.252 07:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.252 ************************************
00:05:37.252 END TEST scheduler_create_thread
00:05:37.252 ************************************
00:05:37.252 07:29:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:37.252 07:29:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2822202
00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2822202 ']'
00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@958 -- #
kill -0 2822202 00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2822202 00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2822202' 00:05:37.252 killing process with pid 2822202 00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2822202 00:05:37.252 07:29:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2822202 00:05:37.817 [2024-11-19 07:29:29.500897] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:38.753 00:05:38.753 real 0m3.125s 00:05:38.753 user 0m5.456s 00:05:38.753 sys 0m0.506s 00:05:38.753 07:29:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.753 07:29:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.753 ************************************ 00:05:38.753 END TEST event_scheduler 00:05:38.753 ************************************ 00:05:38.753 07:29:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.753 07:29:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.753 07:29:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.753 07:29:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.753 07:29:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.753 ************************************ 00:05:38.753 START TEST app_repeat 00:05:38.753 ************************************ 00:05:38.753 07:29:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2822646 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2822646' 00:05:38.753 Process app_repeat pid: 2822646 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.753 spdk_app_start Round 0 00:05:38.753 07:29:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2822646 /var/tmp/spdk-nbd.sock 00:05:38.753 07:29:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2822646 ']' 00:05:38.753 07:29:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.753 07:29:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.753 07:29:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.753 07:29:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.753 07:29:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.753 [2024-11-19 07:29:30.597425] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:38.753 [2024-11-19 07:29:30.597584] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822646 ] 00:05:39.011 [2024-11-19 07:29:30.733968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.011 [2024-11-19 07:29:30.862059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.011 [2024-11-19 07:29:30.862066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.946 07:29:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.946 07:29:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:39.946 07:29:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.204 Malloc0 00:05:40.204 07:29:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.463 Malloc1 00:05:40.463 07:29:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.463 
07:29:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.463 07:29:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.721 /dev/nbd0 00:05:40.721 07:29:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.721 07:29:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:40.721 1+0 records in 00:05:40.721 1+0 records out 00:05:40.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215601 s, 19.0 MB/s 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.721 07:29:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.721 07:29:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.721 07:29:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.721 07:29:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.288 /dev/nbd1 00:05:41.288 07:29:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.288 07:29:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.288 07:29:32 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.288 1+0 records in 00:05:41.288 1+0 records out 00:05:41.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213983 s, 19.1 MB/s 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.288 07:29:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.288 07:29:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.288 07:29:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.288 07:29:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.288 07:29:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.288 07:29:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.546 { 00:05:41.546 "nbd_device": "/dev/nbd0", 00:05:41.546 "bdev_name": "Malloc0" 00:05:41.546 }, 00:05:41.546 { 00:05:41.546 "nbd_device": "/dev/nbd1", 00:05:41.546 "bdev_name": "Malloc1" 00:05:41.546 } 00:05:41.546 ]' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.546 { 00:05:41.546 "nbd_device": "/dev/nbd0", 00:05:41.546 "bdev_name": "Malloc0" 00:05:41.546 
}, 00:05:41.546 { 00:05:41.546 "nbd_device": "/dev/nbd1", 00:05:41.546 "bdev_name": "Malloc1" 00:05:41.546 } 00:05:41.546 ]' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.546 /dev/nbd1' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.546 /dev/nbd1' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.546 256+0 records in 00:05:41.546 256+0 records out 00:05:41.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514631 s, 204 MB/s 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.546 256+0 records in 00:05:41.546 256+0 records out 00:05:41.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246314 s, 42.6 MB/s 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.546 256+0 records in 00:05:41.546 256+0 records out 00:05:41.546 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299977 s, 35.0 MB/s 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.546 07:29:33 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.546 07:29:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.803 07:29:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.061 07:29:33 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.061 07:29:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.318 07:29:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.318 07:29:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.318 07:29:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.575 07:29:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.575 07:29:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.575 07:29:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.575 07:29:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.575 07:29:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.575 07:29:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.575 07:29:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.575 07:29:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.575 07:29:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.575 07:29:34 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.833 07:29:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.208 [2024-11-19 07:29:35.904300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.208 [2024-11-19 07:29:36.039146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.208 [2024-11-19 07:29:36.039150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.466 [2024-11-19 07:29:36.246192] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.466 [2024-11-19 07:29:36.246294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.839 07:29:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.839 07:29:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.839 spdk_app_start Round 1 00:05:45.839 07:29:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2822646 /var/tmp/spdk-nbd.sock 00:05:45.839 07:29:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2822646 ']' 00:05:45.839 07:29:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.839 07:29:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.839 07:29:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:45.839 07:29:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.839 07:29:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.097 07:29:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.097 07:29:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:46.097 07:29:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.355 Malloc0 00:05:46.613 07:29:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.872 Malloc1 00:05:46.872 07:29:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.872 07:29:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.131 /dev/nbd0 00:05:47.131 07:29:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.131 07:29:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.131 1+0 records in 00:05:47.131 1+0 records out 00:05:47.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195685 s, 20.9 MB/s 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.131 07:29:38 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.131 07:29:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.131 07:29:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.131 07:29:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.131 07:29:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.390 /dev/nbd1 00:05:47.390 07:29:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.390 07:29:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.390 1+0 records in 00:05:47.390 1+0 records out 00:05:47.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236456 s, 17.3 MB/s 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.390 07:29:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.390 07:29:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.390 07:29:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.390 07:29:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.390 07:29:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.390 07:29:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.956 { 00:05:47.956 "nbd_device": "/dev/nbd0", 00:05:47.956 "bdev_name": "Malloc0" 00:05:47.956 }, 00:05:47.956 { 00:05:47.956 "nbd_device": "/dev/nbd1", 00:05:47.956 "bdev_name": "Malloc1" 00:05:47.956 } 00:05:47.956 ]' 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.956 { 00:05:47.956 "nbd_device": "/dev/nbd0", 00:05:47.956 "bdev_name": "Malloc0" 00:05:47.956 }, 00:05:47.956 { 00:05:47.956 "nbd_device": "/dev/nbd1", 00:05:47.956 "bdev_name": "Malloc1" 00:05:47.956 } 00:05:47.956 ]' 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.956 /dev/nbd1' 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.956 /dev/nbd1' 00:05:47.956 
07:29:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.956 256+0 records in 00:05:47.956 256+0 records out 00:05:47.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502747 s, 209 MB/s 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.956 256+0 records in 00:05:47.956 256+0 records out 00:05:47.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239314 s, 43.8 MB/s 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.956 256+0 records in 00:05:47.956 256+0 records out 00:05:47.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297288 s, 35.3 MB/s 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.956 07:29:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.957 07:29:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.215 07:29:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.473 07:29:40 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.473 07:29:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.732 07:29:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.732 07:29:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.732 07:29:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.732 07:29:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.732 07:29:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.732 07:29:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.990 07:29:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.990 07:29:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.990 07:29:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.990 07:29:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.990 07:29:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.990 07:29:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.990 07:29:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.248 07:29:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.623 [2024-11-19 07:29:42.301270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.623 [2024-11-19 07:29:42.436465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.623 [2024-11-19 07:29:42.436466] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.882 [2024-11-19 07:29:42.650759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.882 [2024-11-19 07:29:42.650836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.256 07:29:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.256 07:29:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:52.256 spdk_app_start Round 2 00:05:52.256 07:29:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2822646 /var/tmp/spdk-nbd.sock 00:05:52.256 07:29:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2822646 ']' 00:05:52.256 07:29:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.256 07:29:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.256 07:29:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:52.256 07:29:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.256 07:29:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.514 07:29:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.514 07:29:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:52.514 07:29:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.081 Malloc0 00:05:53.081 07:29:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.339 Malloc1 00:05:53.339 07:29:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.339 07:29:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.597 /dev/nbd0 00:05:53.597 07:29:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.597 07:29:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.597 07:29:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:53.597 07:29:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.597 07:29:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.597 07:29:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.597 07:29:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:53.597 07:29:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.597 07:29:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.598 07:29:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.598 07:29:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.598 1+0 records in 00:05:53.598 1+0 records out 00:05:53.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291901 s, 14.0 MB/s 00:05:53.598 07:29:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.598 07:29:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.598 07:29:45 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.598 07:29:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.598 07:29:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.598 07:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.598 07:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.598 07:29:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.886 /dev/nbd1 00:05:53.886 07:29:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.886 07:29:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.886 1+0 records in 00:05:53.886 1+0 records out 00:05:53.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236551 s, 17.3 MB/s 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.886 07:29:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.886 07:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.886 07:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.886 07:29:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.886 07:29:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.886 07:29:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.181 { 00:05:54.181 "nbd_device": "/dev/nbd0", 00:05:54.181 "bdev_name": "Malloc0" 00:05:54.181 }, 00:05:54.181 { 00:05:54.181 "nbd_device": "/dev/nbd1", 00:05:54.181 "bdev_name": "Malloc1" 00:05:54.181 } 00:05:54.181 ]' 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.181 { 00:05:54.181 "nbd_device": "/dev/nbd0", 00:05:54.181 "bdev_name": "Malloc0" 00:05:54.181 }, 00:05:54.181 { 00:05:54.181 "nbd_device": "/dev/nbd1", 00:05:54.181 "bdev_name": "Malloc1" 00:05:54.181 } 00:05:54.181 ]' 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.181 /dev/nbd1' 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.181 /dev/nbd1' 00:05:54.181 
07:29:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.181 07:29:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.182 07:29:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.182 07:29:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.182 07:29:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.182 07:29:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.182 07:29:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.182 07:29:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.182 256+0 records in 00:05:54.182 256+0 records out 00:05:54.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514912 s, 204 MB/s 00:05:54.182 07:29:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.182 07:29:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.442 256+0 records in 00:05:54.442 256+0 records out 00:05:54.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255094 s, 41.1 MB/s 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.442 256+0 records in 00:05:54.442 256+0 records out 00:05:54.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300189 s, 34.9 MB/s 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.442 07:29:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.700 07:29:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.958 07:29:46 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.958 07:29:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.216 07:29:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.216 07:29:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.782 07:29:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.157 [2024-11-19 07:29:48.720478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.157 [2024-11-19 07:29:48.855234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.157 [2024-11-19 07:29:48.855238] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.157 [2024-11-19 07:29:49.066599] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.157 [2024-11-19 07:29:49.066678] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.063 07:29:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2822646 /var/tmp/spdk-nbd.sock 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2822646 ']' 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.063 07:29:50 event.app_repeat -- event/event.sh@39 -- # killprocess 2822646 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2822646 ']' 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2822646 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2822646 00:05:59.063 07:29:50 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.064 07:29:50 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.064 07:29:50 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2822646' 00:05:59.064 killing process with pid 2822646 00:05:59.064 07:29:50 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2822646 00:05:59.064 07:29:50 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2822646 00:05:59.997 spdk_app_start is called in Round 0. 00:05:59.997 Shutdown signal received, stop current app iteration 00:05:59.997 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:59.997 spdk_app_start is called in Round 1. 00:05:59.997 Shutdown signal received, stop current app iteration 00:05:59.997 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:59.997 spdk_app_start is called in Round 2. 
00:05:59.997 Shutdown signal received, stop current app iteration 00:05:59.997 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:05:59.997 spdk_app_start is called in Round 3. 00:05:59.997 Shutdown signal received, stop current app iteration 00:05:59.997 07:29:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:59.997 07:29:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:59.997 00:05:59.997 real 0m21.327s 00:05:59.997 user 0m45.594s 00:05:59.997 sys 0m3.330s 00:05:59.997 07:29:51 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.997 07:29:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.997 ************************************ 00:05:59.997 END TEST app_repeat 00:05:59.997 ************************************ 00:05:59.997 07:29:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:59.997 07:29:51 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:59.997 07:29:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.997 07:29:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.997 07:29:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.997 ************************************ 00:05:59.997 START TEST cpu_locks 00:05:59.997 ************************************ 00:05:59.997 07:29:51 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:00.256 * Looking for test storage... 
00:06:00.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:00.256 07:29:51 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.256 07:29:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.256 07:29:51 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.256 07:29:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.256 07:29:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:00.256 07:29:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.256 07:29:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.256 --rc genhtml_branch_coverage=1 00:06:00.256 --rc genhtml_function_coverage=1 00:06:00.256 --rc genhtml_legend=1 00:06:00.256 --rc geninfo_all_blocks=1 00:06:00.256 --rc geninfo_unexecuted_blocks=1 00:06:00.256 00:06:00.256 ' 00:06:00.256 07:29:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.256 --rc genhtml_branch_coverage=1 00:06:00.256 --rc genhtml_function_coverage=1 00:06:00.256 --rc genhtml_legend=1 00:06:00.256 --rc geninfo_all_blocks=1 00:06:00.256 --rc geninfo_unexecuted_blocks=1 
00:06:00.256 00:06:00.256 ' 00:06:00.256 07:29:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.256 --rc genhtml_branch_coverage=1 00:06:00.256 --rc genhtml_function_coverage=1 00:06:00.256 --rc genhtml_legend=1 00:06:00.256 --rc geninfo_all_blocks=1 00:06:00.256 --rc geninfo_unexecuted_blocks=1 00:06:00.256 00:06:00.256 ' 00:06:00.256 07:29:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.256 --rc genhtml_branch_coverage=1 00:06:00.256 --rc genhtml_function_coverage=1 00:06:00.256 --rc genhtml_legend=1 00:06:00.256 --rc geninfo_all_blocks=1 00:06:00.256 --rc geninfo_unexecuted_blocks=1 00:06:00.256 00:06:00.256 ' 00:06:00.256 07:29:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.256 07:29:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.256 07:29:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.256 07:29:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.256 07:29:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.256 07:29:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.256 07:29:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.256 ************************************ 00:06:00.256 START TEST default_locks 00:06:00.256 ************************************ 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2825407 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2825407 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2825407 ']' 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.256 07:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.256 [2024-11-19 07:29:52.167657] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:00.256 [2024-11-19 07:29:52.167804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825407 ] 00:06:00.515 [2024-11-19 07:29:52.309207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.515 [2024-11-19 07:29:52.447711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.889 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.889 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:01.889 07:29:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2825407 00:06:01.889 07:29:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2825407 00:06:01.889 07:29:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.889 lslocks: write error 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2825407 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2825407 ']' 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2825407 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825407 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2825407' 00:06:01.890 killing process with pid 2825407 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2825407 00:06:01.890 07:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2825407 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2825407 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2825407 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2825407 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2825407 ']' 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
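Aside: the `killprocess`/`NOT waitforlisten` sequence traced above relies on a common shell liveness idiom: `kill -0 <pid>` delivers no signal and only tests whether the pid exists, and `ps --no-headers -o comm= <pid>` confirms the process name (the trace shows `reactor_0`) before sending the real kill. A minimal standalone sketch of that pattern, using a throwaway `sleep` process instead of `spdk_tgt`:

```shell
#!/usr/bin/env bash
# Liveness-check sketch modeled on the killprocess() steps visible in the
# trace (kill -0, then ps comm match). "sleep" stands in for the target.
sleep 30 &
pid=$!

alive_before=no
# kill -0 sends no signal; exit status alone reports whether the pid exists
kill -0 "$pid" 2>/dev/null && alive_before=yes

# Process name, as killprocess checks it before deciding how to kill
comm=$(ps --no-headers -o comm= "$pid")

kill -9 "$pid" 2>/dev/null
wait "$pid" 2>/dev/null   # reap, so the pid truly disappears

alive_after=no
kill -0 "$pid" 2>/dev/null && alive_after=yes

echo "before=$alive_before comm=$comm after=$alive_after"
```

Once the pid has been killed and reaped, `kill -0` fails, which is exactly why the second `waitforlisten` in the trace reports "No such process" and returns 1.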
00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2825407) - No such process 00:06:04.421 ERROR: process (pid: 2825407) is no longer running 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.421 00:06:04.421 real 0m4.076s 00:06:04.421 user 0m4.025s 00:06:04.421 sys 0m0.765s 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.421 07:29:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.421 ************************************ 00:06:04.421 END TEST default_locks 00:06:04.421 ************************************ 00:06:04.421 07:29:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.421 07:29:56 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.421 07:29:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.421 07:29:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.421 ************************************ 00:06:04.421 START TEST default_locks_via_rpc 00:06:04.421 ************************************ 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2825847 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2825847 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2825847 ']' 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.421 07:29:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.421 [2024-11-19 07:29:56.298018] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:04.421 [2024-11-19 07:29:56.298154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825847 ] 00:06:04.680 [2024-11-19 07:29:56.444776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.680 [2024-11-19 07:29:56.573180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.614 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.614 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.614 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:05.614 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.614 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.615 07:29:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2825847 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2825847 00:06:05.615 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2825847 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2825847 ']' 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2825847 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825847 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825847' 00:06:06.181 killing process with pid 2825847 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2825847 00:06:06.181 07:29:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2825847 00:06:08.712 00:06:08.712 real 0m4.094s 00:06:08.712 user 0m4.085s 00:06:08.712 sys 0m0.736s 00:06:08.712 07:30:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.712 07:30:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.712 ************************************ 00:06:08.712 END TEST default_locks_via_rpc 00:06:08.712 ************************************ 00:06:08.712 07:30:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:08.712 07:30:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.712 07:30:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.712 07:30:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.712 ************************************ 00:06:08.712 START TEST non_locking_app_on_locked_coremask 00:06:08.712 ************************************ 00:06:08.712 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:08.712 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2826434 00:06:08.712 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.712 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2826434 /var/tmp/spdk.sock 00:06:08.712 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2826434 ']' 00:06:08.712 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.712 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.712 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:08.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.713 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.713 07:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.713 [2024-11-19 07:30:00.443790] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:08.713 [2024-11-19 07:30:00.443933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826434 ] 00:06:08.713 [2024-11-19 07:30:00.591433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.972 [2024-11-19 07:30:00.728381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2826648 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2826648 /var/tmp/spdk2.sock 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2826648 ']' 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.905 07:30:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.905 [2024-11-19 07:30:01.728514] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:09.905 [2024-11-19 07:30:01.728669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826648 ] 00:06:10.163 [2024-11-19 07:30:01.939898] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.163 [2024-11-19 07:30:01.939981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.422 [2024-11-19 07:30:02.219051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.960 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.960 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.960 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2826434 00:06:12.960 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2826434 00:06:12.960 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.218 lslocks: write error 00:06:13.218 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2826434 00:06:13.218 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2826434 ']' 00:06:13.218 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2826434 00:06:13.218 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.218 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.218 07:30:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826434 00:06:13.218 07:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.218 07:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.218 07:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2826434' 00:06:13.218 killing process with pid 2826434 00:06:13.218 07:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2826434 00:06:13.218 07:30:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2826434 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2826648 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2826648 ']' 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2826648 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826648 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2826648' 00:06:18.487 killing process with pid 2826648 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2826648 00:06:18.487 07:30:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2826648 00:06:20.388 00:06:20.388 real 0m11.951s 00:06:20.388 user 0m12.391s 00:06:20.388 sys 0m1.478s 00:06:20.388 07:30:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.388 07:30:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.388 ************************************ 00:06:20.388 END TEST non_locking_app_on_locked_coremask 00:06:20.388 ************************************ 00:06:20.388 07:30:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:20.388 07:30:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.388 07:30:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.388 07:30:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.646 ************************************ 00:06:20.646 START TEST locking_app_on_unlocked_coremask 00:06:20.646 ************************************ 00:06:20.646 07:30:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:20.646 07:30:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2828518 00:06:20.646 07:30:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:20.646 07:30:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2828518 /var/tmp/spdk.sock 00:06:20.646 07:30:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2828518 ']' 00:06:20.646 07:30:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.646 07:30:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.646 07:30:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.646 07:30:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.646 07:30:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.646 [2024-11-19 07:30:12.438838] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:20.646 [2024-11-19 07:30:12.438984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828518 ] 00:06:20.904 [2024-11-19 07:30:12.591305] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.904 [2024-11-19 07:30:12.591381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.904 [2024-11-19 07:30:12.729959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2828659 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2828659 /var/tmp/spdk2.sock 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2828659 ']' 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.839 07:30:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.839 [2024-11-19 07:30:13.749749] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:21.839 [2024-11-19 07:30:13.749883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828659 ] 00:06:22.097 [2024-11-19 07:30:13.960468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.356 [2024-11-19 07:30:14.239936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.886 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.886 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.886 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2828659 00:06:24.886 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2828659 00:06:24.886 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.144 lslocks: write error 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2828518 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2828518 ']' 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2828518 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828518 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2828518' 00:06:25.144 killing process with pid 2828518 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2828518 00:06:25.144 07:30:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2828518 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2828659 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2828659 ']' 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2828659 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828659 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2828659' 00:06:30.415 killing process with pid 2828659 00:06:30.415 07:30:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2828659 00:06:30.415 07:30:21 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2828659 00:06:32.944 00:06:32.944 real 0m11.929s 00:06:32.944 user 0m12.272s 00:06:32.944 sys 0m1.486s 00:06:32.944 07:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.944 07:30:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.944 ************************************ 00:06:32.944 END TEST locking_app_on_unlocked_coremask 00:06:32.944 ************************************ 00:06:32.944 07:30:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:32.944 07:30:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.944 07:30:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.944 07:30:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.944 ************************************ 00:06:32.944 START TEST locking_app_on_locked_coremask 00:06:32.944 ************************************ 00:06:32.944 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:32.944 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2829896 00:06:32.944 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.944 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2829896 /var/tmp/spdk.sock 00:06:32.945 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2829896 ']' 00:06:32.945 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:32.945 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.945 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.945 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.945 07:30:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.945 [2024-11-19 07:30:24.416234] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:32.945 [2024-11-19 07:30:24.416368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829896 ] 00:06:32.945 [2024-11-19 07:30:24.555643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.945 [2024-11-19 07:30:24.696003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2830035 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2830035 /var/tmp/spdk2.sock 
00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2830035 /var/tmp/spdk2.sock 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2830035 /var/tmp/spdk2.sock 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2830035 ']' 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.882 07:30:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.882 [2024-11-19 07:30:25.742027] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:33.882 [2024-11-19 07:30:25.742186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830035 ] 00:06:34.141 [2024-11-19 07:30:25.959047] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2829896 has claimed it. 00:06:34.141 [2024-11-19 07:30:25.959144] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2830035) - No such process 00:06:34.708 ERROR: process (pid: 2830035) is no longer running 00:06:34.708 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.708 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:34.708 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:34.708 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.708 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.708 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.708 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2829896 00:06:34.708 07:30:26 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2829896 00:06:34.708 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.967 lslocks: write error 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2829896 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2829896 ']' 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2829896 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829896 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829896' 00:06:34.967 killing process with pid 2829896 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2829896 00:06:34.967 07:30:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2829896 00:06:37.620 00:06:37.620 real 0m4.934s 00:06:37.620 user 0m5.198s 00:06:37.620 sys 0m0.996s 00:06:37.620 07:30:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.620 07:30:29 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.620 ************************************ 00:06:37.620 END TEST locking_app_on_locked_coremask 00:06:37.620 ************************************ 00:06:37.620 07:30:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.620 07:30:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.620 07:30:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.620 07:30:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.620 ************************************ 00:06:37.620 START TEST locking_overlapped_coremask 00:06:37.620 ************************************ 00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2830590 00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2830590 /var/tmp/spdk.sock 00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2830590 ']' 00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.620 07:30:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.620 [2024-11-19 07:30:29.418533] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:37.621 [2024-11-19 07:30:29.418711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830590 ] 00:06:37.880 [2024-11-19 07:30:29.576560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.880 [2024-11-19 07:30:29.724616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.880 [2024-11-19 07:30:29.724667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.880 [2024-11-19 07:30:29.724673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2830733 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2830733 /var/tmp/spdk2.sock 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2830733 /var/tmp/spdk2.sock 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2830733 /var/tmp/spdk2.sock 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2830733 ']' 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.814 07:30:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.814 [2024-11-19 07:30:30.674561] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:38.814 [2024-11-19 07:30:30.674715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830733 ] 00:06:39.072 [2024-11-19 07:30:30.868535] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2830590 has claimed it. 00:06:39.072 [2024-11-19 07:30:30.868621] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2830733) - No such process 00:06:39.638 ERROR: process (pid: 2830733) is no longer running 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2830590 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2830590 ']' 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2830590 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2830590 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2830590' 00:06:39.638 killing process with pid 2830590 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2830590 00:06:39.638 07:30:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2830590 00:06:42.172 00:06:42.172 real 0m4.235s 00:06:42.172 user 0m11.496s 00:06:42.172 sys 0m0.791s 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.172 
************************************ 00:06:42.172 END TEST locking_overlapped_coremask 00:06:42.172 ************************************ 00:06:42.172 07:30:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.172 07:30:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.172 07:30:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.172 07:30:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.172 ************************************ 00:06:42.172 START TEST locking_overlapped_coremask_via_rpc 00:06:42.172 ************************************ 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2831049 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2831049 /var/tmp/spdk.sock 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2831049 ']' 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:42.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.172 07:30:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.172 [2024-11-19 07:30:33.694411] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:42.172 [2024-11-19 07:30:33.694568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831049 ] 00:06:42.172 [2024-11-19 07:30:33.828720] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:42.172 [2024-11-19 07:30:33.828803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.172 [2024-11-19 07:30:33.964579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.172 [2024-11-19 07:30:33.964642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.172 [2024-11-19 07:30:33.964651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2831244 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2831244 /var/tmp/spdk2.sock 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2831244 ']' 00:06:43.107 07:30:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.107 07:30:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.107 [2024-11-19 07:30:34.907989] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:43.107 [2024-11-19 07:30:34.908134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831244 ] 00:06:43.365 [2024-11-19 07:30:35.102614] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.365 [2024-11-19 07:30:35.102695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.623 [2024-11-19 07:30:35.360244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.623 [2024-11-19 07:30:35.363758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.623 [2024-11-19 07:30:35.363767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.150 07:30:37 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.150 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.151 [2024-11-19 07:30:37.620847] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2831049 has claimed it. 00:06:46.151 request: 00:06:46.151 { 00:06:46.151 "method": "framework_enable_cpumask_locks", 00:06:46.151 "req_id": 1 00:06:46.151 } 00:06:46.151 Got JSON-RPC error response 00:06:46.151 response: 00:06:46.151 { 00:06:46.151 "code": -32603, 00:06:46.151 "message": "Failed to claim CPU core: 2" 00:06:46.151 } 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2831049 /var/tmp/spdk.sock 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2831049 ']' 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2831244 /var/tmp/spdk2.sock 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2831244 ']' 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.151 07:30:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.410 07:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.410 07:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.410 07:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.410 07:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.410 07:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.410 07:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.410 00:06:46.410 real 0m4.588s 00:06:46.410 user 0m1.607s 00:06:46.410 sys 0m0.233s 00:06:46.410 07:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.410 07:30:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.410 ************************************ 00:06:46.410 END TEST locking_overlapped_coremask_via_rpc 00:06:46.410 ************************************ 00:06:46.410 07:30:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:46.410 07:30:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2831049 ]] 00:06:46.410 07:30:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2831049 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2831049 ']' 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2831049 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831049 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831049' 00:06:46.410 killing process with pid 2831049 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2831049 00:06:46.410 07:30:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2831049 00:06:48.940 07:30:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2831244 ]] 00:06:48.940 07:30:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2831244 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2831244 ']' 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2831244 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831244 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2831244' 00:06:48.940 killing process with pid 2831244 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2831244 00:06:48.940 07:30:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2831244 00:06:50.841 07:30:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.841 07:30:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:50.841 07:30:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2831049 ]] 00:06:50.841 07:30:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2831049 00:06:50.841 07:30:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2831049 ']' 00:06:50.841 07:30:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2831049 00:06:50.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2831049) - No such process 00:06:50.841 07:30:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2831049 is not found' 00:06:50.841 Process with pid 2831049 is not found 00:06:50.841 07:30:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2831244 ]] 00:06:50.841 07:30:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2831244 00:06:50.841 07:30:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2831244 ']' 00:06:50.841 07:30:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2831244 00:06:50.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2831244) - No such process 00:06:50.841 07:30:42 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2831244 is not found' 00:06:50.841 Process with pid 2831244 is not found 00:06:50.841 07:30:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.841 00:06:50.841 real 0m50.741s 00:06:50.841 user 1m26.215s 00:06:50.841 sys 0m7.754s 00:06:50.841 07:30:42 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.841 
07:30:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.841 ************************************ 00:06:50.841 END TEST cpu_locks 00:06:50.841 ************************************ 00:06:50.841 00:06:50.841 real 1m20.336s 00:06:50.841 user 2m24.719s 00:06:50.841 sys 0m12.274s 00:06:50.841 07:30:42 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.841 07:30:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.841 ************************************ 00:06:50.841 END TEST event 00:06:50.841 ************************************ 00:06:50.841 07:30:42 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:50.841 07:30:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.841 07:30:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.841 07:30:42 -- common/autotest_common.sh@10 -- # set +x 00:06:50.841 ************************************ 00:06:50.841 START TEST thread 00:06:50.841 ************************************ 00:06:50.841 07:30:42 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:51.100 * Looking for test storage... 
00:06:51.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.100 07:30:42 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.100 07:30:42 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.100 07:30:42 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.100 07:30:42 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.100 07:30:42 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.100 07:30:42 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.100 07:30:42 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.100 07:30:42 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.100 07:30:42 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.100 07:30:42 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.100 07:30:42 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.100 07:30:42 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:51.100 07:30:42 thread -- scripts/common.sh@345 -- # : 1 00:06:51.100 07:30:42 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.100 07:30:42 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.100 07:30:42 thread -- scripts/common.sh@365 -- # decimal 1 00:06:51.100 07:30:42 thread -- scripts/common.sh@353 -- # local d=1 00:06:51.100 07:30:42 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.100 07:30:42 thread -- scripts/common.sh@355 -- # echo 1 00:06:51.100 07:30:42 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.100 07:30:42 thread -- scripts/common.sh@366 -- # decimal 2 00:06:51.100 07:30:42 thread -- scripts/common.sh@353 -- # local d=2 00:06:51.100 07:30:42 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.100 07:30:42 thread -- scripts/common.sh@355 -- # echo 2 00:06:51.100 07:30:42 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.100 07:30:42 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.100 07:30:42 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.100 07:30:42 thread -- scripts/common.sh@368 -- # return 0 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.100 --rc genhtml_branch_coverage=1 00:06:51.100 --rc genhtml_function_coverage=1 00:06:51.100 --rc genhtml_legend=1 00:06:51.100 --rc geninfo_all_blocks=1 00:06:51.100 --rc geninfo_unexecuted_blocks=1 00:06:51.100 00:06:51.100 ' 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.100 --rc genhtml_branch_coverage=1 00:06:51.100 --rc genhtml_function_coverage=1 00:06:51.100 --rc genhtml_legend=1 00:06:51.100 --rc geninfo_all_blocks=1 00:06:51.100 --rc geninfo_unexecuted_blocks=1 00:06:51.100 00:06:51.100 ' 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.100 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.100 --rc genhtml_branch_coverage=1 00:06:51.100 --rc genhtml_function_coverage=1 00:06:51.100 --rc genhtml_legend=1 00:06:51.100 --rc geninfo_all_blocks=1 00:06:51.100 --rc geninfo_unexecuted_blocks=1 00:06:51.100 00:06:51.100 ' 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.100 --rc genhtml_branch_coverage=1 00:06:51.100 --rc genhtml_function_coverage=1 00:06:51.100 --rc genhtml_legend=1 00:06:51.100 --rc geninfo_all_blocks=1 00:06:51.100 --rc geninfo_unexecuted_blocks=1 00:06:51.100 00:06:51.100 ' 00:06:51.100 07:30:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.100 07:30:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.100 ************************************ 00:06:51.100 START TEST thread_poller_perf 00:06:51.100 ************************************ 00:06:51.100 07:30:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.100 [2024-11-19 07:30:42.956079] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:51.100 [2024-11-19 07:30:42.956188] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832334 ] 00:06:51.359 [2024-11-19 07:30:43.097651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.359 [2024-11-19 07:30:43.235919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.359 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:52.733 [2024-11-19T06:30:44.663Z] ====================================== 00:06:52.733 [2024-11-19T06:30:44.663Z] busy:2716696177 (cyc) 00:06:52.733 [2024-11-19T06:30:44.663Z] total_run_count: 282000 00:06:52.733 [2024-11-19T06:30:44.663Z] tsc_hz: 2700000000 (cyc) 00:06:52.733 [2024-11-19T06:30:44.663Z] ====================================== 00:06:52.733 [2024-11-19T06:30:44.663Z] poller_cost: 9633 (cyc), 3567 (nsec) 00:06:52.733 00:06:52.733 real 0m1.580s 00:06:52.733 user 0m1.433s 00:06:52.733 sys 0m0.139s 00:06:52.733 07:30:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.733 07:30:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.733 ************************************ 00:06:52.733 END TEST thread_poller_perf 00:06:52.733 ************************************ 00:06:52.733 07:30:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.733 07:30:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:52.733 07:30:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.733 07:30:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.733 ************************************ 00:06:52.733 START TEST thread_poller_perf 00:06:52.733 
************************************ 00:06:52.733 07:30:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.733 [2024-11-19 07:30:44.582380] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:52.733 [2024-11-19 07:30:44.582501] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832503 ] 00:06:52.992 [2024-11-19 07:30:44.721273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.992 [2024-11-19 07:30:44.859335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.992 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:54.366 [2024-11-19T06:30:46.296Z] ====================================== 00:06:54.366 [2024-11-19T06:30:46.296Z] busy:2705251654 (cyc) 00:06:54.366 [2024-11-19T06:30:46.296Z] total_run_count: 3743000 00:06:54.366 [2024-11-19T06:30:46.296Z] tsc_hz: 2700000000 (cyc) 00:06:54.366 [2024-11-19T06:30:46.296Z] ====================================== 00:06:54.366 [2024-11-19T06:30:46.296Z] poller_cost: 722 (cyc), 267 (nsec) 00:06:54.366 00:06:54.366 real 0m1.568s 00:06:54.366 user 0m1.414s 00:06:54.366 sys 0m0.146s 00:06:54.366 07:30:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.366 07:30:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.366 ************************************ 00:06:54.366 END TEST thread_poller_perf 00:06:54.366 ************************************ 00:06:54.366 07:30:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:54.366 00:06:54.366 real 0m3.400s 00:06:54.366 user 0m3.008s 00:06:54.366 sys 0m0.389s 00:06:54.366 07:30:46 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.366 07:30:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.366 ************************************ 00:06:54.366 END TEST thread 00:06:54.366 ************************************ 00:06:54.366 07:30:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:54.366 07:30:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:54.366 07:30:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.366 07:30:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.366 07:30:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.366 ************************************ 00:06:54.366 START TEST app_cmdline 00:06:54.366 ************************************ 00:06:54.366 07:30:46 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:54.366 * Looking for test storage... 00:06:54.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:54.366 07:30:46 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.366 07:30:46 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.366 07:30:46 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.625 07:30:46 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.625 07:30:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:54.625 07:30:46 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.625 07:30:46 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.625 --rc genhtml_branch_coverage=1 
00:06:54.625 --rc genhtml_function_coverage=1 00:06:54.625 --rc genhtml_legend=1 00:06:54.625 --rc geninfo_all_blocks=1 00:06:54.625 --rc geninfo_unexecuted_blocks=1 00:06:54.625 00:06:54.625 ' 00:06:54.625 07:30:46 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.625 --rc genhtml_branch_coverage=1 00:06:54.625 --rc genhtml_function_coverage=1 00:06:54.625 --rc genhtml_legend=1 00:06:54.625 --rc geninfo_all_blocks=1 00:06:54.625 --rc geninfo_unexecuted_blocks=1 00:06:54.625 00:06:54.625 ' 00:06:54.625 07:30:46 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.625 --rc genhtml_branch_coverage=1 00:06:54.625 --rc genhtml_function_coverage=1 00:06:54.625 --rc genhtml_legend=1 00:06:54.625 --rc geninfo_all_blocks=1 00:06:54.625 --rc geninfo_unexecuted_blocks=1 00:06:54.625 00:06:54.625 ' 00:06:54.625 07:30:46 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.625 --rc genhtml_branch_coverage=1 00:06:54.625 --rc genhtml_function_coverage=1 00:06:54.625 --rc genhtml_legend=1 00:06:54.625 --rc geninfo_all_blocks=1 00:06:54.625 --rc geninfo_unexecuted_blocks=1 00:06:54.625 00:06:54.625 ' 00:06:54.626 07:30:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:54.626 07:30:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2832827 00:06:54.626 07:30:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:54.626 07:30:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2832827 00:06:54.626 07:30:46 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2832827 ']' 00:06:54.626 07:30:46 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:54.626 07:30:46 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.626 07:30:46 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.626 07:30:46 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.626 07:30:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.626 [2024-11-19 07:30:46.441166] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:54.626 [2024-11-19 07:30:46.441315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832827 ] 00:06:54.884 [2024-11-19 07:30:46.585467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.884 [2024-11-19 07:30:46.723025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.819 07:30:47 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.819 07:30:47 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:55.819 07:30:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:56.075 { 00:06:56.075 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:06:56.075 "fields": { 00:06:56.075 "major": 25, 00:06:56.075 "minor": 1, 00:06:56.076 "patch": 0, 00:06:56.076 "suffix": "-pre", 00:06:56.076 "commit": "d47eb51c9" 00:06:56.076 } 00:06:56.076 } 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.333 07:30:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.333 07:30:48 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.334 07:30:48 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:56.334 07:30:48 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.334 07:30:48 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:56.334 07:30:48 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.591 request: 00:06:56.591 { 00:06:56.591 "method": "env_dpdk_get_mem_stats", 00:06:56.591 "req_id": 1 00:06:56.591 } 00:06:56.591 Got JSON-RPC error response 00:06:56.591 response: 00:06:56.591 { 00:06:56.591 "code": -32601, 00:06:56.591 "message": "Method not found" 00:06:56.591 } 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.591 07:30:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2832827 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2832827 ']' 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2832827 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832827 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832827' 00:06:56.591 killing process with pid 2832827 00:06:56.591 
07:30:48 app_cmdline -- common/autotest_common.sh@973 -- # kill 2832827 00:06:56.591 07:30:48 app_cmdline -- common/autotest_common.sh@978 -- # wait 2832827 00:06:59.122 00:06:59.122 real 0m4.659s 00:06:59.122 user 0m5.151s 00:06:59.122 sys 0m0.763s 00:06:59.122 07:30:50 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.122 07:30:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.122 ************************************ 00:06:59.122 END TEST app_cmdline 00:06:59.122 ************************************ 00:06:59.122 07:30:50 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:59.122 07:30:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.122 07:30:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.122 07:30:50 -- common/autotest_common.sh@10 -- # set +x 00:06:59.122 ************************************ 00:06:59.122 START TEST version 00:06:59.122 ************************************ 00:06:59.122 07:30:50 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:59.122 * Looking for test storage... 
00:06:59.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:59.122 07:30:50 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.122 07:30:50 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.122 07:30:50 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.122 07:30:51 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.122 07:30:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.122 07:30:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.122 07:30:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.122 07:30:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.122 07:30:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.122 07:30:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.122 07:30:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.122 07:30:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.122 07:30:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.122 07:30:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.122 07:30:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.122 07:30:51 version -- scripts/common.sh@344 -- # case "$op" in 00:06:59.122 07:30:51 version -- scripts/common.sh@345 -- # : 1 00:06:59.122 07:30:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.122 07:30:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.122 07:30:51 version -- scripts/common.sh@365 -- # decimal 1 00:06:59.122 07:30:51 version -- scripts/common.sh@353 -- # local d=1 00:06:59.122 07:30:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.122 07:30:51 version -- scripts/common.sh@355 -- # echo 1 00:06:59.122 07:30:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.122 07:30:51 version -- scripts/common.sh@366 -- # decimal 2 00:06:59.122 07:30:51 version -- scripts/common.sh@353 -- # local d=2 00:06:59.122 07:30:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.122 07:30:51 version -- scripts/common.sh@355 -- # echo 2 00:06:59.122 07:30:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.122 07:30:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.122 07:30:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.122 07:30:51 version -- scripts/common.sh@368 -- # return 0 00:06:59.122 07:30:51 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.122 07:30:51 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.122 --rc genhtml_branch_coverage=1 00:06:59.122 --rc genhtml_function_coverage=1 00:06:59.122 --rc genhtml_legend=1 00:06:59.122 --rc geninfo_all_blocks=1 00:06:59.122 --rc geninfo_unexecuted_blocks=1 00:06:59.122 00:06:59.122 ' 00:06:59.122 07:30:51 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.122 --rc genhtml_branch_coverage=1 00:06:59.122 --rc genhtml_function_coverage=1 00:06:59.122 --rc genhtml_legend=1 00:06:59.122 --rc geninfo_all_blocks=1 00:06:59.122 --rc geninfo_unexecuted_blocks=1 00:06:59.122 00:06:59.122 ' 00:06:59.122 07:30:51 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.122 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.122 --rc genhtml_branch_coverage=1 00:06:59.122 --rc genhtml_function_coverage=1 00:06:59.122 --rc genhtml_legend=1 00:06:59.122 --rc geninfo_all_blocks=1 00:06:59.122 --rc geninfo_unexecuted_blocks=1 00:06:59.122 00:06:59.122 ' 00:06:59.122 07:30:51 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.122 --rc genhtml_branch_coverage=1 00:06:59.122 --rc genhtml_function_coverage=1 00:06:59.122 --rc genhtml_legend=1 00:06:59.122 --rc geninfo_all_blocks=1 00:06:59.122 --rc geninfo_unexecuted_blocks=1 00:06:59.122 00:06:59.122 ' 00:06:59.122 07:30:51 version -- app/version.sh@17 -- # get_header_version major 00:06:59.122 07:30:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:59.122 07:30:51 version -- app/version.sh@14 -- # cut -f2 00:06:59.122 07:30:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.122 07:30:51 version -- app/version.sh@17 -- # major=25 00:06:59.122 07:30:51 version -- app/version.sh@18 -- # get_header_version minor 00:06:59.122 07:30:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:59.122 07:30:51 version -- app/version.sh@14 -- # cut -f2 00:06:59.122 07:30:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.122 07:30:51 version -- app/version.sh@18 -- # minor=1 00:06:59.122 07:30:51 version -- app/version.sh@19 -- # get_header_version patch 00:06:59.122 07:30:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:59.122 07:30:51 version -- app/version.sh@14 -- # cut -f2 00:06:59.122 07:30:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.122 
07:30:51 version -- app/version.sh@19 -- # patch=0 00:06:59.122 07:30:51 version -- app/version.sh@20 -- # get_header_version suffix 00:06:59.122 07:30:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:59.122 07:30:51 version -- app/version.sh@14 -- # cut -f2 00:06:59.122 07:30:51 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.381 07:30:51 version -- app/version.sh@20 -- # suffix=-pre 00:06:59.381 07:30:51 version -- app/version.sh@22 -- # version=25.1 00:06:59.381 07:30:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:59.381 07:30:51 version -- app/version.sh@28 -- # version=25.1rc0 00:06:59.381 07:30:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:59.381 07:30:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:59.381 07:30:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:59.381 07:30:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:59.381 00:06:59.381 real 0m0.197s 00:06:59.381 user 0m0.127s 00:06:59.381 sys 0m0.095s 00:06:59.381 07:30:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.381 07:30:51 version -- common/autotest_common.sh@10 -- # set +x 00:06:59.381 ************************************ 00:06:59.381 END TEST version 00:06:59.381 ************************************ 00:06:59.381 07:30:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:59.381 07:30:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:59.381 07:30:51 -- spdk/autotest.sh@194 -- # uname -s 00:06:59.381 07:30:51 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:59.381 07:30:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.381 07:30:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.381 07:30:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:59.381 07:30:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:59.381 07:30:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:59.381 07:30:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:59.381 07:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:59.381 07:30:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:59.381 07:30:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:59.381 07:30:51 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:59.381 07:30:51 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:59.381 07:30:51 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:59.381 07:30:51 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:59.381 07:30:51 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:59.381 07:30:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.381 07:30:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.381 07:30:51 -- common/autotest_common.sh@10 -- # set +x 00:06:59.381 ************************************ 00:06:59.381 START TEST nvmf_tcp 00:06:59.381 ************************************ 00:06:59.381 07:30:51 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:59.381 * Looking for test storage... 
00:06:59.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:59.381 07:30:51 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.381 07:30:51 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.382 07:30:51 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.382 --rc genhtml_branch_coverage=1 00:06:59.382 --rc genhtml_function_coverage=1 00:06:59.382 --rc genhtml_legend=1 00:06:59.382 --rc geninfo_all_blocks=1 00:06:59.382 --rc geninfo_unexecuted_blocks=1 00:06:59.382 00:06:59.382 ' 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.382 --rc genhtml_branch_coverage=1 00:06:59.382 --rc genhtml_function_coverage=1 00:06:59.382 --rc genhtml_legend=1 00:06:59.382 --rc geninfo_all_blocks=1 00:06:59.382 --rc geninfo_unexecuted_blocks=1 00:06:59.382 00:06:59.382 ' 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:59.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.382 --rc genhtml_branch_coverage=1 00:06:59.382 --rc genhtml_function_coverage=1 00:06:59.382 --rc genhtml_legend=1 00:06:59.382 --rc geninfo_all_blocks=1 00:06:59.382 --rc geninfo_unexecuted_blocks=1 00:06:59.382 00:06:59.382 ' 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.382 --rc genhtml_branch_coverage=1 00:06:59.382 --rc genhtml_function_coverage=1 00:06:59.382 --rc genhtml_legend=1 00:06:59.382 --rc geninfo_all_blocks=1 00:06:59.382 --rc geninfo_unexecuted_blocks=1 00:06:59.382 00:06:59.382 ' 00:06:59.382 07:30:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:59.382 07:30:51 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:59.382 07:30:51 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.382 07:30:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.640 ************************************ 00:06:59.640 START TEST nvmf_target_core 00:06:59.640 ************************************ 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:59.640 * Looking for test storage... 
00:06:59.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.640 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.641 --rc genhtml_branch_coverage=1 00:06:59.641 --rc genhtml_function_coverage=1 00:06:59.641 --rc genhtml_legend=1 00:06:59.641 --rc geninfo_all_blocks=1 00:06:59.641 --rc geninfo_unexecuted_blocks=1 00:06:59.641 00:06:59.641 ' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.641 --rc genhtml_branch_coverage=1 
00:06:59.641 --rc genhtml_function_coverage=1 00:06:59.641 --rc genhtml_legend=1 00:06:59.641 --rc geninfo_all_blocks=1 00:06:59.641 --rc geninfo_unexecuted_blocks=1 00:06:59.641 00:06:59.641 ' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.641 --rc genhtml_branch_coverage=1 00:06:59.641 --rc genhtml_function_coverage=1 00:06:59.641 --rc genhtml_legend=1 00:06:59.641 --rc geninfo_all_blocks=1 00:06:59.641 --rc geninfo_unexecuted_blocks=1 00:06:59.641 00:06:59.641 ' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.641 --rc genhtml_branch_coverage=1 00:06:59.641 --rc genhtml_function_coverage=1 00:06:59.641 --rc genhtml_legend=1 00:06:59.641 --rc geninfo_all_blocks=1 00:06:59.641 --rc geninfo_unexecuted_blocks=1 00:06:59.641 00:06:59.641 ' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.641 ************************************ 00:06:59.641 START TEST nvmf_abort 00:06:59.641 ************************************ 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:59.641 * Looking for test storage... 
00:06:59.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.641 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.900 
07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.900 --rc genhtml_branch_coverage=1 00:06:59.900 --rc genhtml_function_coverage=1 00:06:59.900 --rc genhtml_legend=1 00:06:59.900 --rc geninfo_all_blocks=1 00:06:59.900 --rc 
geninfo_unexecuted_blocks=1 00:06:59.900 00:06:59.900 ' 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.900 --rc genhtml_branch_coverage=1 00:06:59.900 --rc genhtml_function_coverage=1 00:06:59.900 --rc genhtml_legend=1 00:06:59.900 --rc geninfo_all_blocks=1 00:06:59.900 --rc geninfo_unexecuted_blocks=1 00:06:59.900 00:06:59.900 ' 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.900 --rc genhtml_branch_coverage=1 00:06:59.900 --rc genhtml_function_coverage=1 00:06:59.900 --rc genhtml_legend=1 00:06:59.900 --rc geninfo_all_blocks=1 00:06:59.900 --rc geninfo_unexecuted_blocks=1 00:06:59.900 00:06:59.900 ' 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.900 --rc genhtml_branch_coverage=1 00:06:59.900 --rc genhtml_function_coverage=1 00:06:59.900 --rc genhtml_legend=1 00:06:59.900 --rc geninfo_all_blocks=1 00:06:59.900 --rc geninfo_unexecuted_blocks=1 00:06:59.900 00:06:59.900 ' 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.900 07:30:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.900 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
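The heavily repeated `/opt/go`, `/opt/golangci`, and `/opt/protoc` entries in the PATH above come from `paths/export.sh` being sourced once per nested test script, each time prepending the same directories again. A minimal sketch of an idempotent guard that would avoid the duplication (a common shell idiom, not the actual SPDK `paths/export.sh` logic; the directory is taken from the trace purely as an example):

```shell
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present.
# Generic sketch of an idempotent PATH guard, not SPDK's implementation.
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;             # already present: do nothing
        *) PATH="$1:$PATH" ;;    # otherwise prepend once
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/go/1.21.1/bin  # repeated sourcing becomes a no-op
export PATH
```

With this guard, sourcing the export script from every nested test layer leaves exactly one copy of each directory in PATH.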
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:59.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
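The trace above records a real shell error from `nvmf/common.sh` line 33: `'[' '' -eq 1 ']'` fails with `[: : integer expression expected` because an empty string reaches a numeric test. Defaulting the expansion avoids it; a minimal sketch under an illustrative variable name (not the actual SPDK flag):

```shell
#!/usr/bin/env bash
# An empty string in a numeric test ([ '' -eq 1 ]) triggers
# "[: : integer expression expected", as seen in the log above.
# ${VAR:-0} substitutes 0 when VAR is unset OR empty, so the test
# always sees an integer. SOME_FLAG is an illustrative name only.
SOME_FLAG=""
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"    # prints "flag unset": empty defaults to 0
fi
```

Note that `:-` (colon-dash) covers both the unset and the empty case, whereas plain `-` would still pass the empty string through and reproduce the error.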
gather_supported_nvmf_pci_devs 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:59.901 07:30:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:01.803 07:30:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:01.803 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:01.803 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:01.803 07:30:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:01.803 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:01.803 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.803 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.804 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.804 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.804 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:01.804 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:02.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:02.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:07:02.062 00:07:02.062 --- 10.0.0.2 ping statistics --- 00:07:02.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.062 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:07:02.062 00:07:02.062 --- 10.0.0.1 ping statistics --- 00:07:02.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.062 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort 
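The `nvmf_tcp_init` sequence above builds the target/initiator split by moving one port of the physical NIC pair (`cvl_0_0`) into a network namespace, addressing both sides, and verifying reachability with `ping` in each direction. A minimal sketch of the same pattern using a veth pair instead of hardware, so it can run on any Linux box with root and iproute2 (namespace and interface names are illustrative; the script skips cleanly when privileges or tools are missing):

```shell
#!/usr/bin/env bash
# Recreate the log's target/initiator topology with a veth pair
# (sketch only; the real test moves a physical cvl_0_0 port instead).
set -e
NS=demo_ns_spdk
if [ "$(id -u)" -ne 0 ] || ! command -v ip >/dev/null 2>&1 \
   || ! command -v ping >/dev/null 2>&1; then
    echo "skipping: needs root, iproute2, and ping"
    exit 0
fi
ip netns add "$NS"
ip link add veth_tgt type veth peer name veth_ini
ip link set veth_tgt netns "$NS"                       # target side into ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
ip addr add 10.0.0.1/24 dev veth_ini                   # initiator side
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up
ip link set veth_ini up
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator
ip netns del "$NS"                                     # also removes veth_tgt
```

Deleting the namespace tears down the moved interface end, which is why the log's cleanup phase only needs `ip -4 addr flush` on the remaining host-side port.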
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2835186 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2835186 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2835186 ']' 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.062 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:02.063 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.063 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.063 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.063 07:30:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:02.063 [2024-11-19 07:30:53.891777] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:02.063 [2024-11-19 07:30:53.891925] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.321 [2024-11-19 07:30:54.051628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.321 [2024-11-19 07:30:54.195126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.321 [2024-11-19 07:30:54.195222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.321 [2024-11-19 07:30:54.195249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.321 [2024-11-19 07:30:54.195274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.321 [2024-11-19 07:30:54.195300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:02.321 [2024-11-19 07:30:54.198066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.321 [2024-11-19 07:30:54.198162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.321 [2024-11-19 07:30:54.198165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.255 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.255 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:03.255 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 [2024-11-19 07:30:54.932449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.256 07:30:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 Malloc0 00:07:03.256 07:30:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 Delay0 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 [2024-11-19 07:30:55.067067] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.256 07:30:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:03.514 [2024-11-19 07:30:55.233903] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:05.413 Initializing NVMe Controllers 00:07:05.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:05.413 controller IO queue size 128 less than required 00:07:05.413 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:05.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:05.413 Initialization complete. Launching workers. 
00:07:05.413 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 22392 00:07:05.413 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 22453, failed to submit 66 00:07:05.413 success 22392, unsuccessful 61, failed 0 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:05.671 rmmod nvme_tcp 00:07:05.671 rmmod nvme_fabrics 00:07:05.671 rmmod nvme_keyring 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:05.671 07:30:57 
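The target configuration that the abort test drove through `rpc_cmd`, scattered across the xtrace above, condenses to the RPC sequence below (a sketch assuming a running `nvmf_tgt` and SPDK's `rpc.py` on PATH; values are the ones visible in the trace):

```shell
#!/usr/bin/env bash
# Condensed from the rpc_cmd calls in the trace above. Sketch only:
# assumes an nvmf_tgt is already running and rpc.py is reachable.
set -e
if ! command -v rpc.py >/dev/null 2>&1; then
    echo "skipping: SPDK rpc.py not found"
    exit 0
fi
rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0
rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
```

The `Delay0` bdev wraps `Malloc0` with 1 ms average latencies on every operation, which is what gives the abort example enough in-flight I/O (queue depth 128 at `-q 128`) to have commands worth aborting.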
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2835186 ']' 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2835186 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2835186 ']' 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2835186 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835186 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2835186' 00:07:05.671 killing process with pid 2835186 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2835186 00:07:05.671 07:30:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2835186 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# iptables-restore 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.045 07:30:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:08.951 00:07:08.951 real 0m9.206s 00:07:08.951 user 0m15.415s 00:07:08.951 sys 0m2.687s 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.951 ************************************ 00:07:08.951 END TEST nvmf_abort 00:07:08.951 ************************************ 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:08.951 ************************************ 00:07:08.951 START TEST nvmf_ns_hotplug_stress 00:07:08.951 ************************************ 00:07:08.951 07:31:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:08.951 * Looking for test storage... 00:07:08.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.951 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.212 
07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.212 07:31:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.212 --rc genhtml_branch_coverage=1 00:07:09.212 --rc genhtml_function_coverage=1 00:07:09.212 --rc genhtml_legend=1 00:07:09.212 --rc geninfo_all_blocks=1 00:07:09.212 --rc geninfo_unexecuted_blocks=1 00:07:09.212 00:07:09.212 ' 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.212 --rc genhtml_branch_coverage=1 00:07:09.212 --rc genhtml_function_coverage=1 00:07:09.212 --rc genhtml_legend=1 00:07:09.212 --rc geninfo_all_blocks=1 00:07:09.212 --rc geninfo_unexecuted_blocks=1 00:07:09.212 00:07:09.212 ' 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.212 --rc genhtml_branch_coverage=1 00:07:09.212 --rc genhtml_function_coverage=1 00:07:09.212 --rc genhtml_legend=1 00:07:09.212 --rc geninfo_all_blocks=1 00:07:09.212 --rc geninfo_unexecuted_blocks=1 00:07:09.212 00:07:09.212 ' 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.212 --rc genhtml_branch_coverage=1 00:07:09.212 --rc genhtml_function_coverage=1 00:07:09.212 --rc genhtml_legend=1 00:07:09.212 --rc geninfo_all_blocks=1 00:07:09.212 --rc geninfo_unexecuted_blocks=1 00:07:09.212 
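The `lt 1.15 2` walk above (scripts/common.sh's `cmp_versions` splitting each version on `.`/`-` and comparing component-wise until it can return) can be sketched as a small bash function. The function name `ver_lt` and the exact padding of missing components with 0 are assumptions based on the xtrace output, not the verbatim SPDK source.

```shell
ver_lt() {
    # split both versions on '.' and '-' and compare component-wise,
    # mirroring the cmp_versions walk in the xtrace above (an assumption,
    # not the verbatim SPDK helper)
    local IFS=.-
    local -a v1 v2
    local i n a b
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    n=${#v1[@]}
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}    # pad missing components with 0
        (( a < b )) && return 0        # strictly less-than
        (( a > b )) && return 1
    done
    return 1                           # equal is not less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

This matches the log's outcome: with `ver1=(1 15)` and `ver2=(2)`, the first component decides and `lt 1.15 2` returns 0.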
00:07:09.212 ' 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.212 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
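The `common.sh: line 33: [: : integer expression expected` warning above comes from evaluating `'[' '' -eq 1 ']'`: `test` requires integers on both sides of `-eq`, and the variable being tested expanded to the empty string. A minimal reproduction, with the usual defensive default; the variable name here is hypothetical, as the log does not show which flag was empty.

```shell
SPDK_EXAMPLE_FLAG=""   # hypothetical stand-in for the empty variable at line 33

# naive numeric test on an empty string -> "integer expression expected",
# exit status 2 (stderr suppressed here to keep the output clean)
[ "$SPDK_EXAMPLE_FLAG" -eq 1 ] 2>/dev/null
echo "naive status: $?"

# defensive form: substitute 0 when the variable is unset or empty
if [ "${SPDK_EXAMPLE_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

The `${var:-0}` expansion is the common guard; the harness otherwise proceeds anyway because the failing `[` is not fatal under `set +e` semantics.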
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.213 07:31:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:11.745 07:31:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:11.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:11.745 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:11.745 07:31:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:11.745 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.745 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.746 07:31:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:11.746 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.746 07:31:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:11.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:11.746 00:07:11.746 --- 10.0.0.2 ping statistics --- 00:07:11.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.746 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:07:11.746 00:07:11.746 --- 10.0.0.1 ping statistics --- 00:07:11.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.746 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
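The topology `nvmf_tcp_init` builds above can be reconstructed as follows: the target port (`cvl_0_0`) is moved into a private network namespace and addressed `10.0.0.2/24`, while the initiator port (`cvl_0_1`) stays in the default namespace as `10.0.0.1/24`, then an iptables rule admits port 4420 and both directions are ping-verified. Interface names and addresses are taken from the log; the function wrapper and `RUN_NETNS_SETUP` guard are additions, since this mutates host networking and needs root on a disposable test box.

```shell
NS=cvl_0_0_ns_spdk

setup_test_netns() {
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the host
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # admit NVMe/TCP traffic; the SPDK_NVMF comment lets cleanup find the rule
    # later with iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF'
    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
}

# guarded: only run when explicitly requested (and as root)
if [ "${RUN_NETNS_SETUP:-0}" = 1 ]; then
    setup_test_netns
fi
```

Putting the target in its own namespace is what lets a single two-port NIC exercise a real TCP path: the kernel cannot short-circuit the traffic through loopback, so packets actually traverse the wire between the two ports.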
tcp -o' 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2837801 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2837801 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2837801 ']' 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.746 07:31:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:11.746 [2024-11-19 07:31:03.348762] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:11.746 [2024-11-19 07:31:03.348910] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.746 [2024-11-19 07:31:03.502835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.746 [2024-11-19 07:31:03.638252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.746 [2024-11-19 07:31:03.638329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.746 [2024-11-19 07:31:03.638360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.746 [2024-11-19 07:31:03.638388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.746 [2024-11-19 07:31:03.638409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:11.746 [2024-11-19 07:31:03.641029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.746 [2024-11-19 07:31:03.641095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.746 [2024-11-19 07:31:03.641099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.679 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.679 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:12.679 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:12.679 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:12.679 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:12.679 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.679 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:12.679 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:12.967 [2024-11-19 07:31:04.672616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.967 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:13.250 07:31:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.507 [2024-11-19 07:31:05.202621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.507 07:31:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.764 07:31:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:14.021 Malloc0 00:07:14.021 07:31:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:14.278 Delay0 00:07:14.278 07:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.536 07:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:14.793 NULL1 00:07:14.793 07:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:15.049 07:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2838241 00:07:15.049 07:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:15.049 07:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:15.049 07:31:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.419 Read completed with error (sct=0, sc=11) 00:07:16.419 07:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.677 07:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:16.677 07:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:16.934 true 00:07:16.934 07:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:16.934 07:31:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.866 07:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.866 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.124 07:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:18.124 07:31:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:18.381 true 00:07:18.381 07:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:18.381 07:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.639 07:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.897 07:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:18.897 07:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:19.156 true 00:07:19.156 07:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:19.156 07:31:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.413 07:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.671 07:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:19.671 07:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:19.929 true 00:07:19.929 07:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:19.929 07:31:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.862 07:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.121 07:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:21.121 07:31:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:21.380 true 00:07:21.380 07:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:21.380 07:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.639 07:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.896 07:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:21.896 07:31:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:22.154 true 00:07:22.154 07:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:22.154 07:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.087 07:31:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.344 07:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:23.344 07:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:23.601 true 00:07:23.601 07:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:23.601 07:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.859 07:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.116 07:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:24.117 07:31:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:24.375 true 00:07:24.375 07:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:24.376 07:31:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.308 07:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.566 07:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:25.566 07:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:25.824 true 00:07:25.824 07:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:25.824 07:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.082 07:31:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.340 07:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:26.340 07:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:26.598 true 00:07:26.598 07:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:26.598 07:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.856 07:31:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.115 07:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:27.115 07:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:27.372 true 00:07:27.372 07:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:27.372 07:31:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.303 07:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.868 07:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:28.868 07:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:29.125 true 00:07:29.125 07:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:29.125 07:31:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.383 07:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.640 07:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:29.640 07:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:29.898 true 00:07:29.898 07:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:29.898 07:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.156 07:31:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.413 07:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:30.413 07:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:30.671 true 00:07:30.671 07:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:30.671 07:31:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.605 07:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.863 07:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:31.863 07:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:32.120 true 00:07:32.120 07:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:32.120 07:31:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.377 07:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.634 07:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 
00:07:32.634 07:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:32.892 true 00:07:32.892 07:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:32.892 07:31:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.150 07:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.715 07:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:33.715 07:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:33.715 true 00:07:33.715 07:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:33.715 07:31:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.649 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:34.649 07:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.164 07:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:35.164 07:31:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:35.420 true 00:07:35.420 07:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:35.420 07:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.678 07:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.936 07:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:35.936 07:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:36.193 true 00:07:36.193 07:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:36.193 07:31:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.127 07:31:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.127 07:31:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 
00:07:37.127 07:31:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:37.384 true 00:07:37.642 07:31:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:37.642 07:31:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.900 07:31:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.158 07:31:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:38.158 07:31:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:38.415 true 00:07:38.415 07:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:38.415 07:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.672 07:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.930 07:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:38.930 07:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:39.187 true 00:07:39.187 07:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:39.187 07:31:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.119 07:31:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.377 07:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:40.377 07:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:40.634 true 00:07:40.634 07:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:40.634 07:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.893 07:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.151 07:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:41.151 07:31:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:41.409 true 00:07:41.409 07:31:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:41.409 07:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.667 07:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.924 07:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:41.924 07:31:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:42.205 true 00:07:42.205 07:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:42.205 07:31:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.155 07:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.413 07:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:43.413 07:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:43.670 true 00:07:43.670 07:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:43.670 07:31:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.928 07:31:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.185 07:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:44.185 07:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:44.443 true 00:07:44.443 07:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:44.443 07:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.701 07:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.266 07:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:45.266 07:31:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:45.266 true 00:07:45.266 07:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:45.266 07:31:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.199 Initializing NVMe Controllers 00:07:46.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:46.199 Controller IO queue size 128, less than required. 00:07:46.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:46.199 Controller IO queue size 128, less than required. 00:07:46.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:46.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:46.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:46.199 Initialization complete. Launching workers. 00:07:46.199 ======================================================== 00:07:46.199 Latency(us) 00:07:46.199 Device Information : IOPS MiB/s Average min max 00:07:46.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 574.22 0.28 100217.31 3681.31 1014942.62 00:07:46.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6873.07 3.36 18624.35 4294.09 574976.36 00:07:46.199 ======================================================== 00:07:46.199 Total : 7447.29 3.64 24915.56 3681.31 1014942.62 00:07:46.199 00:07:46.199 07:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.764 07:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:46.764 07:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 
00:07:47.021 true 00:07:47.021 07:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2838241 00:07:47.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2838241) - No such process 00:07:47.021 07:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2838241 00:07:47.021 07:31:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.279 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:47.536 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:47.536 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:47.536 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:47.536 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.536 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:47.793 null0 00:07:47.793 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:47.793 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.793 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:48.050 null1
00:07:48.050 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:48.050 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:48.050 07:31:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:48.308 null2
00:07:48.308 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:48.308 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:48.308 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:48.566 null3
00:07:48.566 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:48.566 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:48.566 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:48.824 null4
00:07:48.824 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:48.824 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:48.824 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:49.081 null5
00:07:49.082 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:49.082 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:49.082 07:31:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:49.339 null6
00:07:49.339 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:49.339 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:49.339 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:49.597 null7
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:49.597 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2842443 2842444 2842446 2842448 2842450 2842452 2842454 2842456
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:49.598 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:49.856 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:49.856 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:49.856 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:49.856 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:49.856 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:49.856 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:49.856 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:49.856 07:31:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.114 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:50.372 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.372 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.372 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:50.629 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:50.629 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:50.630 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:50.630 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:50.630 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
nqn.2016-06.io.spdk:cnode1 3
00:07:50.630 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:50.630 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:50.630 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:50.888 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:51.146 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:51.146 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:51.146 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:51.146 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.146 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:51.146 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:51.146 07:31:42
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:51.146 07:31:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.405 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:51.663 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:51.663 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:51.663 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:51.663 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:51.663 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:51.663 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:51.663 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:51.663 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4
nqn.2016-06.io.spdk:cnode1 null3 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.922 07:31:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.488 07:31:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.488 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.488 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.488 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:52.488 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.488 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.488 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.488 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.488 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.488 07:31:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.488 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.746 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.746 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.746 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.746 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.746 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.746 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.747 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.005 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.005 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.005 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.005 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.005 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.005 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.005 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.005 07:31:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.264 07:31:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.264 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.523 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.523 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.523 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.523 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.523 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.523 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.523 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.523 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.782 
07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.782 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.041 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.041 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.041 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.041 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.041 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.041 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.041 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.041 07:31:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.299 07:31:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.299 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.558 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.558 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.558 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.558 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.817 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.817 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.817 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.817 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.817 07:31:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.817 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.817 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.076 07:31:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.334 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.334 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.334 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.334 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.334 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.334 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.334 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.334 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.593 rmmod nvme_tcp 00:07:55.593 rmmod nvme_fabrics 00:07:55.593 rmmod nvme_keyring 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2837801 ']' 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2837801 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2837801 ']' 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2837801 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2837801 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2837801' 00:07:55.593 killing process with pid 2837801 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2837801 00:07:55.593 07:31:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2837801 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.966 07:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.868 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.868 00:07:58.868 real 0m49.943s 00:07:58.868 user 3m48.019s 00:07:58.868 sys 0m16.392s 00:07:58.868 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.868 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:58.868 ************************************ 00:07:58.868 END TEST nvmf_ns_hotplug_stress 00:07:58.868 ************************************ 00:07:58.868 07:31:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:58.868 07:31:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.868 07:31:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.868 07:31:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.868 ************************************ 00:07:58.868 START TEST nvmf_delete_subsystem 00:07:58.868 ************************************ 00:07:58.869 
07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:59.128 * Looking for test storage... 00:07:59.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.128 07:31:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.128 07:31:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.128 --rc genhtml_branch_coverage=1 00:07:59.128 --rc genhtml_function_coverage=1 00:07:59.128 --rc genhtml_legend=1 00:07:59.128 --rc geninfo_all_blocks=1 00:07:59.128 --rc geninfo_unexecuted_blocks=1 00:07:59.128 00:07:59.128 ' 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.128 --rc genhtml_branch_coverage=1 00:07:59.128 --rc genhtml_function_coverage=1 00:07:59.128 --rc genhtml_legend=1 00:07:59.128 --rc geninfo_all_blocks=1 00:07:59.128 --rc geninfo_unexecuted_blocks=1 00:07:59.128 00:07:59.128 ' 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.128 --rc genhtml_branch_coverage=1 00:07:59.128 --rc genhtml_function_coverage=1 00:07:59.128 --rc genhtml_legend=1 00:07:59.128 --rc geninfo_all_blocks=1 00:07:59.128 --rc geninfo_unexecuted_blocks=1 00:07:59.128 00:07:59.128 ' 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.128 --rc genhtml_branch_coverage=1 00:07:59.128 --rc genhtml_function_coverage=1 00:07:59.128 --rc genhtml_legend=1 00:07:59.128 --rc geninfo_all_blocks=1 00:07:59.128 --rc geninfo_unexecuted_blocks=1 00:07:59.128 00:07:59.128 ' 
00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.128 07:31:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.128 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.129 07:31:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.732 07:31:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:01.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:01.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:01.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:01.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.732 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.733 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.733 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:01.733 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.733 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.733 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.733 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.733 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.733 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.733 07:31:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:01.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:01.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:08:01.733 00:08:01.733 --- 10.0.0.2 ping statistics --- 00:08:01.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.733 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:08:01.733 00:08:01.733 --- 10.0.0.1 ping statistics --- 00:08:01.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.733 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:01.733 07:31:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2845362 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2845362 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2845362 ']' 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.733 07:31:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:01.733 [2024-11-19 07:31:53.255507] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:01.733 [2024-11-19 07:31:53.255649] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.733 [2024-11-19 07:31:53.402781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:01.733 [2024-11-19 07:31:53.535751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.733 [2024-11-19 07:31:53.535851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.733 [2024-11-19 07:31:53.535878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.733 [2024-11-19 07:31:53.535902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.733 [2024-11-19 07:31:53.535923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:01.733 [2024-11-19 07:31:53.538579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.733 [2024-11-19 07:31:53.538581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.300 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.300 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:02.300 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.300 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:02.300 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.558 [2024-11-19 07:31:54.244714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.558 [2024-11-19 07:31:54.261668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.558 NULL1 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.558 Delay0 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.558 07:31:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2845514 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:02.558 07:31:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:02.558 [2024-11-19 07:31:54.396839] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:04.456 07:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.456 07:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.456 07:31:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 starting I/O failed: -6 00:08:04.714 Write completed with error (sct=0, sc=8) 00:08:04.714 Write completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Write completed with error (sct=0, sc=8) 00:08:04.714 starting I/O failed: -6 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 starting I/O failed: -6 00:08:04.714 Write completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Write completed with error (sct=0, sc=8) 00:08:04.714 Write completed with error (sct=0, sc=8) 00:08:04.714 starting I/O failed: -6 00:08:04.714 Write completed with error (sct=0, sc=8) 00:08:04.714 Write completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 starting I/O failed: -6 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.714 Write completed with error (sct=0, sc=8) 00:08:04.714 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error 
(sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 [2024-11-19 07:31:56.495553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, 
sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write 
completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Write completed with error (sct=0, sc=8) 
00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 
00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 Write completed with error (sct=0, sc=8) 00:08:04.715 starting I/O failed: -6 00:08:04.715 Read completed with error (sct=0, sc=8) 00:08:04.715 [2024-11-19 07:31:56.498088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:08:05.649 [2024-11-19 07:31:57.455511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read 
completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Write completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.649 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 [2024-11-19 07:31:57.499418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016b00 is same with the state(6) to be set 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Write 
completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 [2024-11-19 07:31:57.500231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with 
the state(6) to be set 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 
00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 [2024-11-19 07:31:57.500976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Write completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 Read completed with error (sct=0, sc=8) 00:08:05.650 [2024-11-19 07:31:57.505648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:08:05.650 07:31:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.650 07:31:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:05.650 07:31:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2845514 00:08:05.650 07:31:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:05.650 Initializing NVMe Controllers 00:08:05.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:05.650 Controller IO queue size 128, less than required. 00:08:05.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:05.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:05.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:05.650 Initialization complete. Launching workers. 00:08:05.650 ======================================================== 00:08:05.650 Latency(us) 00:08:05.650 Device Information : IOPS MiB/s Average min max 00:08:05.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.93 0.10 945857.50 2489.33 1016838.84 00:08:05.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.92 0.07 902238.27 833.14 1017854.62 00:08:05.650 ======================================================== 00:08:05.650 Total : 343.84 0.17 926966.29 833.14 1017854.62 00:08:05.650 00:08:05.650 [2024-11-19 07:31:57.507619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:08:05.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:06.237 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:06.237 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2845514 00:08:06.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2845514) - No such process 00:08:06.237 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2845514 00:08:06.237 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:06.237 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2845514 00:08:06.237 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:06.237 
07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2845514 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.238 [2024-11-19 
07:31:58.027531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2846035 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2846035 00:08:06.238 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:06.238 [2024-11-19 07:31:58.144472] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:06.805 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.805 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2846035 00:08:06.805 07:31:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.369 07:31:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:07.369 07:31:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2846035 00:08:07.369 07:31:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.626 07:31:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:07.626 07:31:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2846035 00:08:07.626 07:31:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.191 07:32:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.191 07:32:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2846035 00:08:08.191 07:32:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.756 07:32:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.756 07:32:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2846035 00:08:08.756 07:32:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.321 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.321 07:32:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2846035 00:08:09.321 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.321 Initializing NVMe Controllers 00:08:09.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:09.321 Controller IO queue size 128, less than required. 00:08:09.321 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:09.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:09.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:09.321 Initialization complete. Launching workers. 00:08:09.321 ======================================================== 00:08:09.321 Latency(us) 00:08:09.321 Device Information : IOPS MiB/s Average min max 00:08:09.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005671.84 1000272.31 1042010.03 00:08:09.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005384.50 1000248.10 1016126.77 00:08:09.321 ======================================================== 00:08:09.321 Total : 256.00 0.12 1005528.17 1000248.10 1042010.03 00:08:09.321 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2846035 00:08:09.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2846035) - No such process 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2846035 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.887 rmmod nvme_tcp 00:08:09.887 rmmod nvme_fabrics 00:08:09.887 rmmod nvme_keyring 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2845362 ']' 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2845362 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2845362 ']' 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2845362 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.887 07:32:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2845362 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2845362' 00:08:09.887 killing process with pid 2845362 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2845362 00:08:09.887 07:32:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2845362 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.263 07:32:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.166 00:08:13.166 real 0m14.086s 00:08:13.166 user 0m30.693s 00:08:13.166 sys 0m3.221s 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:13.166 ************************************ 00:08:13.166 END TEST nvmf_delete_subsystem 00:08:13.166 ************************************ 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.166 ************************************ 00:08:13.166 START TEST nvmf_host_management 00:08:13.166 ************************************ 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:13.166 * Looking for test storage... 
00:08:13.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:13.166 07:32:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:13.166 07:32:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.166 07:32:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:13.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.166 --rc genhtml_branch_coverage=1 00:08:13.166 --rc genhtml_function_coverage=1 00:08:13.166 --rc genhtml_legend=1 00:08:13.166 --rc geninfo_all_blocks=1 00:08:13.166 --rc geninfo_unexecuted_blocks=1 00:08:13.166 00:08:13.166 ' 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:13.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.166 --rc genhtml_branch_coverage=1 00:08:13.166 --rc genhtml_function_coverage=1 00:08:13.166 --rc genhtml_legend=1 00:08:13.166 --rc geninfo_all_blocks=1 00:08:13.166 --rc geninfo_unexecuted_blocks=1 00:08:13.166 00:08:13.166 ' 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:13.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.166 --rc genhtml_branch_coverage=1 00:08:13.166 --rc genhtml_function_coverage=1 00:08:13.166 --rc genhtml_legend=1 00:08:13.166 --rc geninfo_all_blocks=1 00:08:13.166 --rc geninfo_unexecuted_blocks=1 00:08:13.166 00:08:13.166 ' 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:13.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.166 --rc genhtml_branch_coverage=1 00:08:13.166 --rc genhtml_function_coverage=1 00:08:13.166 --rc genhtml_legend=1 00:08:13.166 --rc geninfo_all_blocks=1 00:08:13.166 --rc geninfo_unexecuted_blocks=1 00:08:13.166 00:08:13.166 ' 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.166 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.167 07:32:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.697 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.697 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.697 07:32:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.697 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.697 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.697 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.697 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.697 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.697 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.697 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.698 07:32:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:15.698 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:15.698 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.698 07:32:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:15.698 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:15.698 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.698 07:32:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:08:15.698 00:08:15.698 --- 10.0.0.2 ping statistics --- 00:08:15.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.698 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:08:15.698 00:08:15.698 --- 10.0.0.1 ping statistics --- 00:08:15.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.698 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.698 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
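The trace above shows the harness building a split network: one e810 port is moved into a dedicated network namespace to act as the target side, the peer port stays in the default namespace as the initiator, and an iptables rule opens the NVMe/TCP port. The sequence can be sketched as a dry-run script; the interface names, IPs, and namespace name below mirror this particular run, and `print_setup` only echoes the commands rather than executing them, since the real ones need root and the actual NICs.

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup sequence seen in the log.
# Names/addresses are taken from this run (assumptions for any other box).
print_setup() {
    ns=cvl_0_0_ns_spdk   # target side lives in its own namespace
    tgt=cvl_0_0          # moved into the namespace, gets 10.0.0.2
    ini=cvl_0_1          # stays in the default namespace, gets 10.0.0.1

    echo "ip netns add $ns"
    echo "ip link set $tgt netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt"
    echo "ip link set $ini up"
    echo "ip netns exec $ns ip link set $tgt up"
    echo "ip netns exec $ns ip link set lo up"
    echo "iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT"
}
print_setup
```

The two `ping -c 1` checks in the log (default namespace to 10.0.0.2, and from inside the namespace back to 10.0.0.1) then confirm the cross-namespace path before the target is started.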
00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2848522 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2848522 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2848522 ']' 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.699 07:32:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.699 [2024-11-19 07:32:07.332848] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:15.699 [2024-11-19 07:32:07.333007] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.699 [2024-11-19 07:32:07.479902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.699 [2024-11-19 07:32:07.610508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.699 [2024-11-19 07:32:07.610584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.699 [2024-11-19 07:32:07.610610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.699 [2024-11-19 07:32:07.610635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.699 [2024-11-19 07:32:07.610654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:15.699 [2024-11-19 07:32:07.613431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.699 [2024-11-19 07:32:07.613525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.699 [2024-11-19 07:32:07.613586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.699 [2024-11-19 07:32:07.613592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.632 [2024-11-19 07:32:08.318031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:16.632 07:32:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.632 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.632 Malloc0 00:08:16.633 [2024-11-19 07:32:08.451935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2848703 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2848703 /var/tmp/bdevperf.sock 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2848703 ']' 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.633 { 00:08:16.633 "params": { 00:08:16.633 "name": "Nvme$subsystem", 00:08:16.633 "trtype": "$TEST_TRANSPORT", 00:08:16.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.633 "adrfam": "ipv4", 00:08:16.633 "trsvcid": "$NVMF_PORT", 00:08:16.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.633 "hdgst": ${hdgst:-false}, 
00:08:16.633 "ddgst": ${ddgst:-false} 00:08:16.633 }, 00:08:16.633 "method": "bdev_nvme_attach_controller" 00:08:16.633 } 00:08:16.633 EOF 00:08:16.633 )") 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:16.633 07:32:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.633 "params": { 00:08:16.633 "name": "Nvme0", 00:08:16.633 "trtype": "tcp", 00:08:16.633 "traddr": "10.0.0.2", 00:08:16.633 "adrfam": "ipv4", 00:08:16.633 "trsvcid": "4420", 00:08:16.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:16.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:16.633 "hdgst": false, 00:08:16.633 "ddgst": false 00:08:16.633 }, 00:08:16.633 "method": "bdev_nvme_attach_controller" 00:08:16.633 }' 00:08:16.891 [2024-11-19 07:32:08.572780] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:16.891 [2024-11-19 07:32:08.572926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848703 ] 00:08:16.891 [2024-11-19 07:32:08.715001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.150 [2024-11-19 07:32:08.845300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.426 Running I/O for 10 seconds... 
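The heredoc-and-`jq` dance above is `gen_nvmf_target_json` assembling the bdevperf attach config, which bdevperf then reads via `--json /dev/fd/63`. A minimal sketch of the final JSON the log shows after `jq` expansion, with values copied from this run; `gen_json` only prints the document and does not start bdevperf.

```shell
#!/bin/sh
# Sketch: the jq-expanded bdevperf config from this run (not a generator,
# just the resulting JSON for one Nvme0 controller over TCP).
gen_json() {
cat <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_json
```

In the harness this is fed to `bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10`, so the 10-second verify workload runs against the namespaced target at 10.0.0.2:4420.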
00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.686 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.686 [2024-11-19 07:32:09.615962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616318] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.686 [2024-11-19 07:32:09.616390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:08:17.946 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.946 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:17.946 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.946 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.946 [2024-11-19 07:32:09.624817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.624874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.624918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.624944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.624971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.946 [2024-11-19 07:32:09.625475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.946 [2024-11-19 07:32:09.625500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.625522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.625547] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.625570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.625594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.625616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.625641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.625663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.625714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.625747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.625772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.625795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.625821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.625844] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.625869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.625892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.625922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.625947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.625972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 
[2024-11-19 07:32:09.626431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.626938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.626960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.627010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.627040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.627074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.627096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.627120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.627141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.627165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.627186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.627216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.627239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.627262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.627283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.627308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.627329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.947 [2024-11-19 07:32:09.627353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.947 [2024-11-19 07:32:09.627375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 
07:32:09.627577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.948 [2024-11-19 07:32:09.627890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.627961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.627989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 07:32:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:17.948 [2024-11-19 07:32:09.628013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.628036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.628071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.628094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.628120] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.948 [2024-11-19 07:32:09.628142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.628564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.948 [2024-11-19 07:32:09.628596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.628622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.948 [2024-11-19 07:32:09.628644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.628666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.948 [2024-11-19 07:32:09.628703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.628728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:17.948 [2024-11-19 07:32:09.628749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:17.948 [2024-11-19 07:32:09.628769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:08:17.948 [2024-11-19 07:32:09.630003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] 
resetting controller
00:08:17.948 task offset: 49152 on job bdev=Nvme0n1 fails
00:08:17.948
00:08:17.948 Latency(us)
00:08:17.948 [2024-11-19T06:32:09.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:17.948 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:17.948 Job: Nvme0n1 ended in about 0.30 seconds with error
00:08:17.948 Verification LBA range: start 0x0 length 0x400
00:08:17.948 Nvme0n1 : 0.30 1266.57 79.16 211.10 0.00 41634.32 4417.61 41360.50
00:08:17.948 [2024-11-19T06:32:09.878Z] ===================================================================================================================
00:08:17.948 [2024-11-19T06:32:09.878Z] Total : 1266.57 79.16 211.10 0.00 41634.32 4417.61 41360.50
00:08:17.948 [2024-11-19 07:32:09.634942] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:17.948 [2024-11-19 07:32:09.635023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:08:17.948 [2024-11-19 07:32:09.690634] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2848703 00:08:18.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2848703) - No such process 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:18.884 { 00:08:18.884 "params": { 00:08:18.884 "name": "Nvme$subsystem", 00:08:18.884 "trtype": "$TEST_TRANSPORT", 00:08:18.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.884 "adrfam": "ipv4", 00:08:18.884 "trsvcid": "$NVMF_PORT", 00:08:18.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.884 "hdgst": ${hdgst:-false}, 00:08:18.884 "ddgst": ${ddgst:-false} 00:08:18.884 }, 00:08:18.884 "method": "bdev_nvme_attach_controller" 00:08:18.884 } 00:08:18.884 EOF 00:08:18.884 )") 00:08:18.884 
07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:18.884 07:32:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:18.884 "params": { 00:08:18.884 "name": "Nvme0", 00:08:18.884 "trtype": "tcp", 00:08:18.884 "traddr": "10.0.0.2", 00:08:18.884 "adrfam": "ipv4", 00:08:18.884 "trsvcid": "4420", 00:08:18.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:18.884 "hdgst": false, 00:08:18.884 "ddgst": false 00:08:18.884 }, 00:08:18.884 "method": "bdev_nvme_attach_controller" 00:08:18.884 }' 00:08:18.884 [2024-11-19 07:32:10.718882] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:18.884 [2024-11-19 07:32:10.719037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848984 ] 00:08:19.143 [2024-11-19 07:32:10.856773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.143 [2024-11-19 07:32:10.987616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.710 Running I/O for 1 seconds... 
00:08:20.645 1344.00 IOPS, 84.00 MiB/s
00:08:20.645 Latency(us)
00:08:20.645 [2024-11-19T06:32:12.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:20.645 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:20.645 Verification LBA range: start 0x0 length 0x400
00:08:20.645 Nvme0n1 : 1.02 1375.35 85.96 0.00 0.00 45735.86 8738.13 40389.59
00:08:20.645 [2024-11-19T06:32:12.575Z] ===================================================================================================================
00:08:20.645 [2024-11-19T06:32:12.575Z] Total : 1375.35 85.96 0.00 0.00 45735.86 8738.13 40389.59
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:21.578 07:32:13
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.578 rmmod nvme_tcp 00:08:21.578 rmmod nvme_fabrics 00:08:21.578 rmmod nvme_keyring 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2848522 ']' 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2848522 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2848522 ']' 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2848522 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848522 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848522' 00:08:21.578 killing process with pid 2848522 00:08:21.578 07:32:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2848522 00:08:21.578 07:32:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2848522 00:08:22.951 [2024-11-19 07:32:14.585628] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.951 07:32:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.852 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.853 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:24.853 00:08:24.853 real 0m11.824s 00:08:24.853 user 0m32.006s 
00:08:24.853 sys 0m3.154s 00:08:24.853 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.853 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:24.853 ************************************ 00:08:24.853 END TEST nvmf_host_management 00:08:24.853 ************************************ 00:08:24.853 07:32:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:24.853 07:32:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.853 07:32:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.853 07:32:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.853 ************************************ 00:08:24.853 START TEST nvmf_lvol 00:08:24.853 ************************************ 00:08:24.853 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:25.111 * Looking for test storage... 
00:08:25.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.111 07:32:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.111 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.112 --rc genhtml_branch_coverage=1 00:08:25.112 --rc genhtml_function_coverage=1 00:08:25.112 --rc genhtml_legend=1 00:08:25.112 --rc geninfo_all_blocks=1 00:08:25.112 --rc geninfo_unexecuted_blocks=1 
00:08:25.112 00:08:25.112 ' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.112 --rc genhtml_branch_coverage=1 00:08:25.112 --rc genhtml_function_coverage=1 00:08:25.112 --rc genhtml_legend=1 00:08:25.112 --rc geninfo_all_blocks=1 00:08:25.112 --rc geninfo_unexecuted_blocks=1 00:08:25.112 00:08:25.112 ' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.112 --rc genhtml_branch_coverage=1 00:08:25.112 --rc genhtml_function_coverage=1 00:08:25.112 --rc genhtml_legend=1 00:08:25.112 --rc geninfo_all_blocks=1 00:08:25.112 --rc geninfo_unexecuted_blocks=1 00:08:25.112 00:08:25.112 ' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.112 --rc genhtml_branch_coverage=1 00:08:25.112 --rc genhtml_function_coverage=1 00:08:25.112 --rc genhtml_legend=1 00:08:25.112 --rc geninfo_all_blocks=1 00:08:25.112 --rc geninfo_unexecuted_blocks=1 00:08:25.112 00:08:25.112 ' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.112 07:32:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:25.112 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:25.113 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.113 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.113 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.113 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:25.113 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:25.113 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.113 07:32:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.642 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:27.643 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:27.643 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.643 
07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:27.643 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.643 07:32:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:27.643 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:27.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:08:27.643 00:08:27.643 --- 10.0.0.2 ping statistics --- 00:08:27.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.643 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:27.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:08:27.643 00:08:27.643 --- 10.0.0.1 ping statistics --- 00:08:27.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.643 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2851344 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2851344 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2851344 ']' 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.643 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.644 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.644 07:32:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:27.644 [2024-11-19 07:32:19.328055] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:27.644 [2024-11-19 07:32:19.328187] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.644 [2024-11-19 07:32:19.480649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:27.901 [2024-11-19 07:32:19.625707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.901 [2024-11-19 07:32:19.625816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.901 [2024-11-19 07:32:19.625844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.901 [2024-11-19 07:32:19.625869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.901 [2024-11-19 07:32:19.625889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:27.901 [2024-11-19 07:32:19.628575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.901 [2024-11-19 07:32:19.628642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.901 [2024-11-19 07:32:19.628648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.465 07:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.465 07:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:28.465 07:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.465 07:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.465 07:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:28.465 07:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.465 07:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:28.722 [2024-11-19 07:32:20.625283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.722 07:32:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:29.288 07:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:29.288 07:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:29.545 07:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:29.545 07:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:29.802 07:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:30.060 07:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=74a0fe65-6078-49b3-8c42-e6361b911319 00:08:30.060 07:32:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 74a0fe65-6078-49b3-8c42-e6361b911319 lvol 20 00:08:30.317 07:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0228c725-eba1-46dc-8edd-6bf935d3f352 00:08:30.317 07:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:30.574 07:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0228c725-eba1-46dc-8edd-6bf935d3f352 00:08:30.831 07:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.090 [2024-11-19 07:32:22.975462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.090 07:32:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.347 07:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2851902 00:08:31.347 07:32:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:31.347 07:32:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:32.722 07:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0228c725-eba1-46dc-8edd-6bf935d3f352 MY_SNAPSHOT 00:08:32.722 07:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3aa0d817-d99f-49db-832d-662932bd0e6f 00:08:32.722 07:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0228c725-eba1-46dc-8edd-6bf935d3f352 30 00:08:33.288 07:32:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3aa0d817-d99f-49db-832d-662932bd0e6f MY_CLONE 00:08:33.546 07:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=15b1b797-4bab-4823-af95-c55c7173ed23 00:08:33.546 07:32:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 15b1b797-4bab-4823-af95-c55c7173ed23 00:08:34.482 07:32:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2851902 00:08:42.616 Initializing NVMe Controllers 00:08:42.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:42.616 Controller IO queue size 128, less than required. 00:08:42.616 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:42.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:42.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:42.616 Initialization complete. Launching workers. 00:08:42.616 ======================================================== 00:08:42.616 Latency(us) 00:08:42.616 Device Information : IOPS MiB/s Average min max 00:08:42.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8148.44 31.83 15718.44 341.73 191916.73 00:08:42.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8066.35 31.51 15875.44 3365.73 143018.15 00:08:42.616 ======================================================== 00:08:42.616 Total : 16214.79 63.34 15796.54 341.73 191916.73 00:08:42.616 00:08:42.616 07:32:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:42.616 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0228c725-eba1-46dc-8edd-6bf935d3f352 00:08:42.616 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 74a0fe65-6078-49b3-8c42-e6361b911319 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.875 rmmod nvme_tcp 00:08:42.875 rmmod nvme_fabrics 00:08:42.875 rmmod nvme_keyring 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2851344 ']' 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2851344 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2851344 ']' 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2851344 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2851344 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2851344' 00:08:42.875 killing process with pid 2851344 00:08:42.875 07:32:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2851344 00:08:42.875 07:32:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2851344 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.250 07:32:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.781 00:08:46.781 real 0m21.381s 00:08:46.781 user 1m11.686s 00:08:46.781 sys 0m5.212s 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:46.781 ************************************ 00:08:46.781 END TEST 
nvmf_lvol 00:08:46.781 ************************************ 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.781 ************************************ 00:08:46.781 START TEST nvmf_lvs_grow 00:08:46.781 ************************************ 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:46.781 * Looking for test storage... 00:08:46.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.781 07:32:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:46.781 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.782 --rc genhtml_branch_coverage=1 00:08:46.782 --rc genhtml_function_coverage=1 00:08:46.782 --rc genhtml_legend=1 00:08:46.782 --rc geninfo_all_blocks=1 00:08:46.782 --rc geninfo_unexecuted_blocks=1 00:08:46.782 00:08:46.782 ' 
00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.782 --rc genhtml_branch_coverage=1 00:08:46.782 --rc genhtml_function_coverage=1 00:08:46.782 --rc genhtml_legend=1 00:08:46.782 --rc geninfo_all_blocks=1 00:08:46.782 --rc geninfo_unexecuted_blocks=1 00:08:46.782 00:08:46.782 ' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.782 --rc genhtml_branch_coverage=1 00:08:46.782 --rc genhtml_function_coverage=1 00:08:46.782 --rc genhtml_legend=1 00:08:46.782 --rc geninfo_all_blocks=1 00:08:46.782 --rc geninfo_unexecuted_blocks=1 00:08:46.782 00:08:46.782 ' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.782 --rc genhtml_branch_coverage=1 00:08:46.782 --rc genhtml_function_coverage=1 00:08:46.782 --rc genhtml_legend=1 00:08:46.782 --rc geninfo_all_blocks=1 00:08:46.782 --rc geninfo_unexecuted_blocks=1 00:08:46.782 00:08:46.782 ' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.782 07:32:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.782 
07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.782 07:32:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.782 
07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.782 07:32:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.682 
07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.682 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.683 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.683 07:32:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:08:48.683 00:08:48.683 --- 10.0.0.2 ping statistics --- 00:08:48.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.683 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:08:48.683 00:08:48.683 --- 10.0.0.1 ping statistics --- 00:08:48.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.683 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2855318 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2855318 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2855318 ']' 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.683 07:32:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.941 [2024-11-19 07:32:40.638714] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:48.941 [2024-11-19 07:32:40.638879] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.941 [2024-11-19 07:32:40.783758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.199 [2024-11-19 07:32:40.919373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.199 [2024-11-19 07:32:40.919459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.199 [2024-11-19 07:32:40.919486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.199 [2024-11-19 07:32:40.919510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.199 [2024-11-19 07:32:40.919530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:49.199 [2024-11-19 07:32:40.921170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.764 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.764 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:49.764 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.764 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.764 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.764 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.764 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.022 [2024-11-19 07:32:41.942011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.279 ************************************ 00:08:50.279 START TEST lvs_grow_clean 00:08:50.279 ************************************ 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.279 07:32:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.279 07:32:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:50.538 07:32:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:50.538 07:32:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:50.795 07:32:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=718bdb70-ec7c-4d60-8c91-90d85a510eea 00:08:50.795 07:32:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:08:50.795 07:32:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:51.053 07:32:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:51.053 07:32:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:51.053 07:32:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 718bdb70-ec7c-4d60-8c91-90d85a510eea lvol 150 00:08:51.310 07:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=197e96a6-10d9-4df8-ac36-6626074a0c35 00:08:51.310 07:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:51.310 07:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:51.568 [2024-11-19 07:32:43.377695] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:51.568 [2024-11-19 07:32:43.377850] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:51.568 true 00:08:51.568 07:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:08:51.568 07:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:51.826 07:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:51.826 07:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.083 07:32:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 197e96a6-10d9-4df8-ac36-6626074a0c35 00:08:52.341 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:52.599 [2024-11-19 07:32:44.501336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.600 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.167 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2855887 00:08:53.167 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:53.167 07:32:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.167 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2855887 /var/tmp/bdevperf.sock 00:08:53.167 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2855887 ']' 00:08:53.167 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.167 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.167 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:53.167 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.167 07:32:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:53.167 [2024-11-19 07:32:44.887825] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:53.167 [2024-11-19 07:32:44.887984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855887 ] 00:08:53.167 [2024-11-19 07:32:45.039703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.426 [2024-11-19 07:32:45.175287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.992 07:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.992 07:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:53.992 07:32:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:54.558 Nvme0n1 00:08:54.558 07:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:54.815 [ 00:08:54.815 { 00:08:54.815 "name": "Nvme0n1", 00:08:54.815 "aliases": [ 00:08:54.815 "197e96a6-10d9-4df8-ac36-6626074a0c35" 00:08:54.815 ], 00:08:54.815 "product_name": "NVMe disk", 00:08:54.815 "block_size": 4096, 00:08:54.815 "num_blocks": 38912, 00:08:54.815 "uuid": "197e96a6-10d9-4df8-ac36-6626074a0c35", 00:08:54.815 "numa_id": 0, 00:08:54.815 "assigned_rate_limits": { 00:08:54.815 "rw_ios_per_sec": 0, 00:08:54.815 "rw_mbytes_per_sec": 0, 00:08:54.815 "r_mbytes_per_sec": 0, 00:08:54.815 "w_mbytes_per_sec": 0 00:08:54.815 }, 00:08:54.815 "claimed": false, 00:08:54.815 "zoned": false, 00:08:54.815 "supported_io_types": { 00:08:54.815 "read": true, 
00:08:54.815 "write": true, 00:08:54.815 "unmap": true, 00:08:54.815 "flush": true, 00:08:54.815 "reset": true, 00:08:54.815 "nvme_admin": true, 00:08:54.815 "nvme_io": true, 00:08:54.815 "nvme_io_md": false, 00:08:54.815 "write_zeroes": true, 00:08:54.815 "zcopy": false, 00:08:54.815 "get_zone_info": false, 00:08:54.815 "zone_management": false, 00:08:54.815 "zone_append": false, 00:08:54.815 "compare": true, 00:08:54.815 "compare_and_write": true, 00:08:54.815 "abort": true, 00:08:54.815 "seek_hole": false, 00:08:54.815 "seek_data": false, 00:08:54.815 "copy": true, 00:08:54.815 "nvme_iov_md": false 00:08:54.815 }, 00:08:54.815 "memory_domains": [ 00:08:54.815 { 00:08:54.815 "dma_device_id": "system", 00:08:54.815 "dma_device_type": 1 00:08:54.815 } 00:08:54.815 ], 00:08:54.815 "driver_specific": { 00:08:54.815 "nvme": [ 00:08:54.815 { 00:08:54.815 "trid": { 00:08:54.815 "trtype": "TCP", 00:08:54.815 "adrfam": "IPv4", 00:08:54.815 "traddr": "10.0.0.2", 00:08:54.815 "trsvcid": "4420", 00:08:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:54.815 }, 00:08:54.815 "ctrlr_data": { 00:08:54.815 "cntlid": 1, 00:08:54.815 "vendor_id": "0x8086", 00:08:54.815 "model_number": "SPDK bdev Controller", 00:08:54.815 "serial_number": "SPDK0", 00:08:54.815 "firmware_revision": "25.01", 00:08:54.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.815 "oacs": { 00:08:54.815 "security": 0, 00:08:54.815 "format": 0, 00:08:54.815 "firmware": 0, 00:08:54.815 "ns_manage": 0 00:08:54.815 }, 00:08:54.815 "multi_ctrlr": true, 00:08:54.815 "ana_reporting": false 00:08:54.815 }, 00:08:54.815 "vs": { 00:08:54.815 "nvme_version": "1.3" 00:08:54.815 }, 00:08:54.815 "ns_data": { 00:08:54.815 "id": 1, 00:08:54.815 "can_share": true 00:08:54.815 } 00:08:54.815 } 00:08:54.815 ], 00:08:54.815 "mp_policy": "active_passive" 00:08:54.816 } 00:08:54.816 } 00:08:54.816 ] 00:08:54.816 07:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2856157 00:08:54.816 07:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:54.816 07:32:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.073 Running I/O for 10 seconds... 00:08:56.007 Latency(us) 00:08:56.007 [2024-11-19T06:32:47.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.007 Nvme0n1 : 1.00 10542.00 41.18 0.00 0.00 0.00 0.00 0.00 00:08:56.007 [2024-11-19T06:32:47.937Z] =================================================================================================================== 00:08:56.007 [2024-11-19T06:32:47.937Z] Total : 10542.00 41.18 0.00 0.00 0.00 0.00 0.00 00:08:56.007 00:08:56.942 07:32:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:08:56.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.942 Nvme0n1 : 2.00 10732.00 41.92 0.00 0.00 0.00 0.00 0.00 00:08:56.942 [2024-11-19T06:32:48.872Z] =================================================================================================================== 00:08:56.942 [2024-11-19T06:32:48.872Z] Total : 10732.00 41.92 0.00 0.00 0.00 0.00 0.00 00:08:56.942 00:08:57.200 true 00:08:57.200 07:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:08:57.200 07:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:57.458 07:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:57.458 07:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:57.458 07:32:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2856157 00:08:58.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.023 Nvme0n1 : 3.00 10795.33 42.17 0.00 0.00 0.00 0.00 0.00 00:08:58.023 [2024-11-19T06:32:49.953Z] =================================================================================================================== 00:08:58.023 [2024-11-19T06:32:49.953Z] Total : 10795.33 42.17 0.00 0.00 0.00 0.00 0.00 00:08:58.023 00:08:58.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.963 Nvme0n1 : 4.00 10827.75 42.30 0.00 0.00 0.00 0.00 0.00 00:08:58.963 [2024-11-19T06:32:50.893Z] =================================================================================================================== 00:08:58.963 [2024-11-19T06:32:50.893Z] Total : 10827.75 42.30 0.00 0.00 0.00 0.00 0.00 00:08:58.963 00:08:59.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.896 Nvme0n1 : 5.00 10846.60 42.37 0.00 0.00 0.00 0.00 0.00 00:08:59.896 [2024-11-19T06:32:51.826Z] =================================================================================================================== 00:08:59.896 [2024-11-19T06:32:51.826Z] Total : 10846.60 42.37 0.00 0.00 0.00 0.00 0.00 00:08:59.896 00:09:01.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.271 Nvme0n1 : 6.00 10891.17 42.54 0.00 0.00 0.00 0.00 0.00 00:09:01.271 [2024-11-19T06:32:53.201Z] =================================================================================================================== 00:09:01.271 
[2024-11-19T06:32:53.201Z] Total : 10891.17 42.54 0.00 0.00 0.00 0.00 0.00 00:09:01.271 00:09:02.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.205 Nvme0n1 : 7.00 10922.86 42.67 0.00 0.00 0.00 0.00 0.00 00:09:02.205 [2024-11-19T06:32:54.135Z] =================================================================================================================== 00:09:02.205 [2024-11-19T06:32:54.135Z] Total : 10922.86 42.67 0.00 0.00 0.00 0.00 0.00 00:09:02.205 00:09:03.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.140 Nvme0n1 : 8.00 10946.62 42.76 0.00 0.00 0.00 0.00 0.00 00:09:03.140 [2024-11-19T06:32:55.070Z] =================================================================================================================== 00:09:03.140 [2024-11-19T06:32:55.070Z] Total : 10946.62 42.76 0.00 0.00 0.00 0.00 0.00 00:09:03.140 00:09:04.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.075 Nvme0n1 : 9.00 10972.11 42.86 0.00 0.00 0.00 0.00 0.00 00:09:04.075 [2024-11-19T06:32:56.005Z] =================================================================================================================== 00:09:04.075 [2024-11-19T06:32:56.005Z] Total : 10972.11 42.86 0.00 0.00 0.00 0.00 0.00 00:09:04.075 00:09:05.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.009 Nvme0n1 : 10.00 10986.30 42.92 0.00 0.00 0.00 0.00 0.00 00:09:05.009 [2024-11-19T06:32:56.939Z] =================================================================================================================== 00:09:05.009 [2024-11-19T06:32:56.939Z] Total : 10986.30 42.92 0.00 0.00 0.00 0.00 0.00 00:09:05.009 00:09:05.009 00:09:05.009 Latency(us) 00:09:05.009 [2024-11-19T06:32:56.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:05.009 Nvme0n1 : 10.01 10985.97 42.91 0.00 0.00 11644.38 5849.69 22913.33 00:09:05.009 [2024-11-19T06:32:56.939Z] =================================================================================================================== 00:09:05.009 [2024-11-19T06:32:56.939Z] Total : 10985.97 42.91 0.00 0.00 11644.38 5849.69 22913.33 00:09:05.009 { 00:09:05.009 "results": [ 00:09:05.009 { 00:09:05.009 "job": "Nvme0n1", 00:09:05.009 "core_mask": "0x2", 00:09:05.009 "workload": "randwrite", 00:09:05.009 "status": "finished", 00:09:05.009 "queue_depth": 128, 00:09:05.009 "io_size": 4096, 00:09:05.009 "runtime": 10.011949, 00:09:05.009 "iops": 10985.972861028356, 00:09:05.009 "mibps": 42.91395648839202, 00:09:05.009 "io_failed": 0, 00:09:05.009 "io_timeout": 0, 00:09:05.009 "avg_latency_us": 11644.37868973118, 00:09:05.009 "min_latency_us": 5849.694814814815, 00:09:05.009 "max_latency_us": 22913.327407407407 00:09:05.009 } 00:09:05.009 ], 00:09:05.009 "core_count": 1 00:09:05.009 } 00:09:05.009 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2855887 00:09:05.009 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2855887 ']' 00:09:05.009 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2855887 00:09:05.009 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:05.009 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.009 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2855887 00:09:05.009 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:05.009 07:32:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:05.009 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2855887' 00:09:05.009 killing process with pid 2855887 00:09:05.009 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2855887 00:09:05.009 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.009 00:09:05.009 Latency(us) 00:09:05.009 [2024-11-19T06:32:56.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.009 [2024-11-19T06:32:56.940Z] =================================================================================================================== 00:09:05.010 [2024-11-19T06:32:56.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.010 07:32:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2855887 00:09:05.943 07:32:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.201 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:06.775 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:09:06.775 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:06.775 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:06.775 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:06.775 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:07.101 [2024-11-19 07:32:58.924903] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.101 
07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:07.101 07:32:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:09:07.383 request: 00:09:07.383 { 00:09:07.383 "uuid": "718bdb70-ec7c-4d60-8c91-90d85a510eea", 00:09:07.383 "method": "bdev_lvol_get_lvstores", 00:09:07.383 "req_id": 1 00:09:07.383 } 00:09:07.383 Got JSON-RPC error response 00:09:07.383 response: 00:09:07.383 { 00:09:07.383 "code": -19, 00:09:07.383 "message": "No such device" 00:09:07.383 } 00:09:07.383 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:07.383 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:07.383 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:07.383 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:07.383 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.988 aio_bdev 00:09:07.988 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 197e96a6-10d9-4df8-ac36-6626074a0c35 00:09:07.988 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=197e96a6-10d9-4df8-ac36-6626074a0c35 00:09:07.988 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.988 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:07.988 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.988 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.988 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:07.988 07:32:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 197e96a6-10d9-4df8-ac36-6626074a0c35 -t 2000 00:09:08.246 [ 00:09:08.246 { 00:09:08.246 "name": "197e96a6-10d9-4df8-ac36-6626074a0c35", 00:09:08.246 "aliases": [ 00:09:08.246 "lvs/lvol" 00:09:08.246 ], 00:09:08.246 "product_name": "Logical Volume", 00:09:08.246 "block_size": 4096, 00:09:08.246 "num_blocks": 38912, 00:09:08.246 "uuid": "197e96a6-10d9-4df8-ac36-6626074a0c35", 00:09:08.246 "assigned_rate_limits": { 00:09:08.246 "rw_ios_per_sec": 0, 00:09:08.246 "rw_mbytes_per_sec": 0, 00:09:08.246 "r_mbytes_per_sec": 0, 00:09:08.246 "w_mbytes_per_sec": 0 00:09:08.246 }, 00:09:08.246 "claimed": false, 00:09:08.246 "zoned": false, 00:09:08.246 "supported_io_types": { 00:09:08.246 "read": true, 00:09:08.246 "write": true, 00:09:08.246 "unmap": true, 00:09:08.246 "flush": false, 00:09:08.246 "reset": true, 00:09:08.246 
"nvme_admin": false, 00:09:08.246 "nvme_io": false, 00:09:08.246 "nvme_io_md": false, 00:09:08.246 "write_zeroes": true, 00:09:08.246 "zcopy": false, 00:09:08.246 "get_zone_info": false, 00:09:08.246 "zone_management": false, 00:09:08.246 "zone_append": false, 00:09:08.246 "compare": false, 00:09:08.246 "compare_and_write": false, 00:09:08.246 "abort": false, 00:09:08.246 "seek_hole": true, 00:09:08.246 "seek_data": true, 00:09:08.246 "copy": false, 00:09:08.246 "nvme_iov_md": false 00:09:08.246 }, 00:09:08.246 "driver_specific": { 00:09:08.246 "lvol": { 00:09:08.246 "lvol_store_uuid": "718bdb70-ec7c-4d60-8c91-90d85a510eea", 00:09:08.246 "base_bdev": "aio_bdev", 00:09:08.246 "thin_provision": false, 00:09:08.246 "num_allocated_clusters": 38, 00:09:08.246 "snapshot": false, 00:09:08.246 "clone": false, 00:09:08.246 "esnap_clone": false 00:09:08.246 } 00:09:08.246 } 00:09:08.246 } 00:09:08.246 ] 00:09:08.246 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:08.246 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:09:08.246 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:08.812 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:08.812 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:09:08.812 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:08.812 07:33:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:08.812 07:33:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 197e96a6-10d9-4df8-ac36-6626074a0c35 00:09:09.378 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 718bdb70-ec7c-4d60-8c91-90d85a510eea 00:09:09.635 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:09.892 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.892 00:09:09.892 real 0m19.629s 00:09:09.892 user 0m19.413s 00:09:09.892 sys 0m1.961s 00:09:09.892 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.892 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:09.892 ************************************ 00:09:09.892 END TEST lvs_grow_clean 00:09:09.893 ************************************ 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.893 ************************************ 
00:09:09.893 START TEST lvs_grow_dirty 00:09:09.893 ************************************ 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.893 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.150 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:10.150 07:33:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:10.408 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:10.408 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:10.408 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.666 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.666 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.666 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 50aa5df0-9277-45e1-a459-f5ead00326dc lvol 150 00:09:10.925 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b137d94d-f3eb-42b1-8422-aa6e4c0660ab 00:09:10.925 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.925 07:33:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:11.184 [2024-11-19 07:33:03.054672] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:11.184 [2024-11-19 07:33:03.054833] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:11.184 true 00:09:11.184 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:11.184 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:11.443 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:11.443 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:12.010 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b137d94d-f3eb-42b1-8422-aa6e4c0660ab 00:09:12.010 07:33:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:12.268 [2024-11-19 07:33:04.170362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.268 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.527 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2858332 00:09:12.527 07:33:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:12.527 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:12.527 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2858332 /var/tmp/bdevperf.sock 00:09:12.527 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2858332 ']' 00:09:12.527 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:12.527 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.527 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:12.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:12.527 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.527 07:33:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.785 [2024-11-19 07:33:04.545429] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:12.785 [2024-11-19 07:33:04.545564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858332 ] 00:09:12.785 [2024-11-19 07:33:04.690581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.044 [2024-11-19 07:33:04.821946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.610 07:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.610 07:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:13.610 07:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:14.177 Nvme0n1 00:09:14.177 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:14.435 [ 00:09:14.435 { 00:09:14.435 "name": "Nvme0n1", 00:09:14.435 "aliases": [ 00:09:14.435 "b137d94d-f3eb-42b1-8422-aa6e4c0660ab" 00:09:14.435 ], 00:09:14.435 "product_name": "NVMe disk", 00:09:14.435 "block_size": 4096, 00:09:14.435 "num_blocks": 38912, 00:09:14.435 "uuid": "b137d94d-f3eb-42b1-8422-aa6e4c0660ab", 00:09:14.435 "numa_id": 0, 00:09:14.435 "assigned_rate_limits": { 00:09:14.435 "rw_ios_per_sec": 0, 00:09:14.435 "rw_mbytes_per_sec": 0, 00:09:14.435 "r_mbytes_per_sec": 0, 00:09:14.435 "w_mbytes_per_sec": 0 00:09:14.435 }, 00:09:14.435 "claimed": false, 00:09:14.435 "zoned": false, 00:09:14.435 "supported_io_types": { 00:09:14.435 "read": true, 
00:09:14.435 "write": true, 00:09:14.435 "unmap": true, 00:09:14.435 "flush": true, 00:09:14.435 "reset": true, 00:09:14.435 "nvme_admin": true, 00:09:14.435 "nvme_io": true, 00:09:14.435 "nvme_io_md": false, 00:09:14.435 "write_zeroes": true, 00:09:14.435 "zcopy": false, 00:09:14.435 "get_zone_info": false, 00:09:14.435 "zone_management": false, 00:09:14.435 "zone_append": false, 00:09:14.435 "compare": true, 00:09:14.435 "compare_and_write": true, 00:09:14.435 "abort": true, 00:09:14.435 "seek_hole": false, 00:09:14.435 "seek_data": false, 00:09:14.435 "copy": true, 00:09:14.435 "nvme_iov_md": false 00:09:14.435 }, 00:09:14.435 "memory_domains": [ 00:09:14.435 { 00:09:14.435 "dma_device_id": "system", 00:09:14.435 "dma_device_type": 1 00:09:14.435 } 00:09:14.435 ], 00:09:14.435 "driver_specific": { 00:09:14.435 "nvme": [ 00:09:14.435 { 00:09:14.435 "trid": { 00:09:14.435 "trtype": "TCP", 00:09:14.435 "adrfam": "IPv4", 00:09:14.435 "traddr": "10.0.0.2", 00:09:14.435 "trsvcid": "4420", 00:09:14.435 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:14.435 }, 00:09:14.435 "ctrlr_data": { 00:09:14.435 "cntlid": 1, 00:09:14.435 "vendor_id": "0x8086", 00:09:14.435 "model_number": "SPDK bdev Controller", 00:09:14.435 "serial_number": "SPDK0", 00:09:14.435 "firmware_revision": "25.01", 00:09:14.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:14.435 "oacs": { 00:09:14.435 "security": 0, 00:09:14.435 "format": 0, 00:09:14.435 "firmware": 0, 00:09:14.435 "ns_manage": 0 00:09:14.435 }, 00:09:14.435 "multi_ctrlr": true, 00:09:14.435 "ana_reporting": false 00:09:14.435 }, 00:09:14.435 "vs": { 00:09:14.435 "nvme_version": "1.3" 00:09:14.435 }, 00:09:14.435 "ns_data": { 00:09:14.435 "id": 1, 00:09:14.435 "can_share": true 00:09:14.435 } 00:09:14.435 } 00:09:14.435 ], 00:09:14.435 "mp_policy": "active_passive" 00:09:14.435 } 00:09:14.435 } 00:09:14.435 ] 00:09:14.695 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2858601 00:09:14.695 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:14.695 07:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:14.695 Running I/O for 10 seconds... 00:09:15.631 Latency(us) 00:09:15.631 [2024-11-19T06:33:07.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.631 Nvme0n1 : 1.00 10542.00 41.18 0.00 0.00 0.00 0.00 0.00 00:09:15.631 [2024-11-19T06:33:07.561Z] =================================================================================================================== 00:09:15.631 [2024-11-19T06:33:07.561Z] Total : 10542.00 41.18 0.00 0.00 0.00 0.00 0.00 00:09:15.631 00:09:16.567 07:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:16.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.567 Nvme0n1 : 2.00 10670.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:16.567 [2024-11-19T06:33:08.497Z] =================================================================================================================== 00:09:16.567 [2024-11-19T06:33:08.497Z] Total : 10670.00 41.68 0.00 0.00 0.00 0.00 0.00 00:09:16.567 00:09:16.825 true 00:09:16.825 07:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:16.825 07:33:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:17.085 07:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:17.085 07:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:17.085 07:33:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2858601 00:09:17.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.652 Nvme0n1 : 3.00 10669.33 41.68 0.00 0.00 0.00 0.00 0.00 00:09:17.652 [2024-11-19T06:33:09.582Z] =================================================================================================================== 00:09:17.652 [2024-11-19T06:33:09.582Z] Total : 10669.33 41.68 0.00 0.00 0.00 0.00 0.00 00:09:17.652 00:09:18.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.588 Nvme0n1 : 4.00 10717.75 41.87 0.00 0.00 0.00 0.00 0.00 00:09:18.588 [2024-11-19T06:33:10.518Z] =================================================================================================================== 00:09:18.588 [2024-11-19T06:33:10.518Z] Total : 10717.75 41.87 0.00 0.00 0.00 0.00 0.00 00:09:18.588 00:09:19.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.965 Nvme0n1 : 5.00 10784.00 42.12 0.00 0.00 0.00 0.00 0.00 00:09:19.965 [2024-11-19T06:33:11.895Z] =================================================================================================================== 00:09:19.965 [2024-11-19T06:33:11.895Z] Total : 10784.00 42.12 0.00 0.00 0.00 0.00 0.00 00:09:19.965 00:09:20.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.901 Nvme0n1 : 6.00 10817.83 42.26 0.00 0.00 0.00 0.00 0.00 00:09:20.901 [2024-11-19T06:33:12.831Z] =================================================================================================================== 00:09:20.901 
[2024-11-19T06:33:12.831Z] Total : 10817.83 42.26 0.00 0.00 0.00 0.00 0.00 00:09:20.901 00:09:21.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.838 Nvme0n1 : 7.00 10850.86 42.39 0.00 0.00 0.00 0.00 0.00 00:09:21.838 [2024-11-19T06:33:13.768Z] =================================================================================================================== 00:09:21.838 [2024-11-19T06:33:13.768Z] Total : 10850.86 42.39 0.00 0.00 0.00 0.00 0.00 00:09:21.838 00:09:22.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.775 Nvme0n1 : 8.00 10884.12 42.52 0.00 0.00 0.00 0.00 0.00 00:09:22.775 [2024-11-19T06:33:14.705Z] =================================================================================================================== 00:09:22.775 [2024-11-19T06:33:14.705Z] Total : 10884.12 42.52 0.00 0.00 0.00 0.00 0.00 00:09:22.775 00:09:23.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.712 Nvme0n1 : 9.00 10902.44 42.59 0.00 0.00 0.00 0.00 0.00 00:09:23.712 [2024-11-19T06:33:15.642Z] =================================================================================================================== 00:09:23.712 [2024-11-19T06:33:15.642Z] Total : 10902.44 42.59 0.00 0.00 0.00 0.00 0.00 00:09:23.712 00:09:24.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.650 Nvme0n1 : 10.00 10923.90 42.67 0.00 0.00 0.00 0.00 0.00 00:09:24.650 [2024-11-19T06:33:16.580Z] =================================================================================================================== 00:09:24.650 [2024-11-19T06:33:16.580Z] Total : 10923.90 42.67 0.00 0.00 0.00 0.00 0.00 00:09:24.650 00:09:24.650 00:09:24.650 Latency(us) 00:09:24.650 [2024-11-19T06:33:16.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:24.650 Nvme0n1 : 10.01 10928.09 42.69 0.00 0.00 11705.70 6747.78 23690.05 00:09:24.650 [2024-11-19T06:33:16.580Z] =================================================================================================================== 00:09:24.650 [2024-11-19T06:33:16.580Z] Total : 10928.09 42.69 0.00 0.00 11705.70 6747.78 23690.05 00:09:24.650 { 00:09:24.650 "results": [ 00:09:24.650 { 00:09:24.650 "job": "Nvme0n1", 00:09:24.650 "core_mask": "0x2", 00:09:24.650 "workload": "randwrite", 00:09:24.650 "status": "finished", 00:09:24.650 "queue_depth": 128, 00:09:24.650 "io_size": 4096, 00:09:24.650 "runtime": 10.007875, 00:09:24.650 "iops": 10928.094125875872, 00:09:24.650 "mibps": 42.687867679202625, 00:09:24.650 "io_failed": 0, 00:09:24.650 "io_timeout": 0, 00:09:24.650 "avg_latency_us": 11705.698634993494, 00:09:24.650 "min_latency_us": 6747.780740740741, 00:09:24.650 "max_latency_us": 23690.05037037037 00:09:24.650 } 00:09:24.650 ], 00:09:24.650 "core_count": 1 00:09:24.650 } 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2858332 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2858332 ']' 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2858332 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2858332 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:24.650 07:33:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2858332' 00:09:24.650 killing process with pid 2858332 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2858332 00:09:24.650 Received shutdown signal, test time was about 10.000000 seconds 00:09:24.650 00:09:24.650 Latency(us) 00:09:24.650 [2024-11-19T06:33:16.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.650 [2024-11-19T06:33:16.580Z] =================================================================================================================== 00:09:24.650 [2024-11-19T06:33:16.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:24.650 07:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2858332 00:09:25.587 07:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.153 07:33:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.153 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:26.153 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2855318 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2855318 00:09:26.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2855318 Killed "${NVMF_APP[@]}" "$@" 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2860566 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2860566 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2860566 ']' 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.720 07:33:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.720 07:33:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.720 [2024-11-19 07:33:18.521702] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:26.720 [2024-11-19 07:33:18.521866] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.978 [2024-11-19 07:33:18.677403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.978 [2024-11-19 07:33:18.799022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.979 [2024-11-19 07:33:18.799149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.979 [2024-11-19 07:33:18.799172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.979 [2024-11-19 07:33:18.799193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.979 [2024-11-19 07:33:18.799209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:26.979 [2024-11-19 07:33:18.800761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.912 [2024-11-19 07:33:19.811668] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:27.912 [2024-11-19 07:33:19.811931] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:27.912 [2024-11-19 07:33:19.812014] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b137d94d-f3eb-42b1-8422-aa6e4c0660ab 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b137d94d-f3eb-42b1-8422-aa6e4c0660ab 
00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.912 07:33:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:28.480 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b137d94d-f3eb-42b1-8422-aa6e4c0660ab -t 2000 00:09:28.738 [ 00:09:28.738 { 00:09:28.738 "name": "b137d94d-f3eb-42b1-8422-aa6e4c0660ab", 00:09:28.738 "aliases": [ 00:09:28.738 "lvs/lvol" 00:09:28.738 ], 00:09:28.738 "product_name": "Logical Volume", 00:09:28.738 "block_size": 4096, 00:09:28.738 "num_blocks": 38912, 00:09:28.738 "uuid": "b137d94d-f3eb-42b1-8422-aa6e4c0660ab", 00:09:28.738 "assigned_rate_limits": { 00:09:28.738 "rw_ios_per_sec": 0, 00:09:28.738 "rw_mbytes_per_sec": 0, 00:09:28.738 "r_mbytes_per_sec": 0, 00:09:28.738 "w_mbytes_per_sec": 0 00:09:28.738 }, 00:09:28.738 "claimed": false, 00:09:28.738 "zoned": false, 00:09:28.738 "supported_io_types": { 00:09:28.738 "read": true, 00:09:28.738 "write": true, 00:09:28.738 "unmap": true, 00:09:28.738 "flush": false, 00:09:28.738 "reset": true, 00:09:28.738 "nvme_admin": false, 00:09:28.738 "nvme_io": false, 00:09:28.738 "nvme_io_md": false, 00:09:28.738 "write_zeroes": true, 00:09:28.738 "zcopy": false, 00:09:28.738 "get_zone_info": false, 00:09:28.738 "zone_management": false, 00:09:28.738 "zone_append": 
false, 00:09:28.738 "compare": false, 00:09:28.738 "compare_and_write": false, 00:09:28.738 "abort": false, 00:09:28.738 "seek_hole": true, 00:09:28.738 "seek_data": true, 00:09:28.738 "copy": false, 00:09:28.738 "nvme_iov_md": false 00:09:28.738 }, 00:09:28.738 "driver_specific": { 00:09:28.738 "lvol": { 00:09:28.738 "lvol_store_uuid": "50aa5df0-9277-45e1-a459-f5ead00326dc", 00:09:28.738 "base_bdev": "aio_bdev", 00:09:28.738 "thin_provision": false, 00:09:28.738 "num_allocated_clusters": 38, 00:09:28.738 "snapshot": false, 00:09:28.738 "clone": false, 00:09:28.738 "esnap_clone": false 00:09:28.738 } 00:09:28.738 } 00:09:28.738 } 00:09:28.738 ] 00:09:28.738 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:28.738 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:28.738 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:28.997 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:28.997 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:28.997 07:33:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:29.254 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:29.254 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:29.513 [2024-11-19 07:33:21.320861] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.513 07:33:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:29.513 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:29.772 request: 00:09:29.772 { 00:09:29.772 "uuid": "50aa5df0-9277-45e1-a459-f5ead00326dc", 00:09:29.772 "method": "bdev_lvol_get_lvstores", 00:09:29.772 "req_id": 1 00:09:29.772 } 00:09:29.772 Got JSON-RPC error response 00:09:29.772 response: 00:09:29.772 { 00:09:29.772 "code": -19, 00:09:29.772 "message": "No such device" 00:09:29.772 } 00:09:29.772 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:29.772 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.772 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.772 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.772 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:30.031 aio_bdev 00:09:30.031 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b137d94d-f3eb-42b1-8422-aa6e4c0660ab 00:09:30.031 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b137d94d-f3eb-42b1-8422-aa6e4c0660ab 00:09:30.031 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.031 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:30.031 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.031 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.031 07:33:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:30.599 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b137d94d-f3eb-42b1-8422-aa6e4c0660ab -t 2000 00:09:30.599 [ 00:09:30.599 { 00:09:30.599 "name": "b137d94d-f3eb-42b1-8422-aa6e4c0660ab", 00:09:30.599 "aliases": [ 00:09:30.599 "lvs/lvol" 00:09:30.599 ], 00:09:30.599 "product_name": "Logical Volume", 00:09:30.599 "block_size": 4096, 00:09:30.599 "num_blocks": 38912, 00:09:30.599 "uuid": "b137d94d-f3eb-42b1-8422-aa6e4c0660ab", 00:09:30.599 "assigned_rate_limits": { 00:09:30.599 "rw_ios_per_sec": 0, 00:09:30.599 "rw_mbytes_per_sec": 0, 00:09:30.599 "r_mbytes_per_sec": 0, 00:09:30.599 "w_mbytes_per_sec": 0 00:09:30.599 }, 00:09:30.599 "claimed": false, 00:09:30.599 "zoned": false, 00:09:30.599 "supported_io_types": { 00:09:30.599 "read": true, 00:09:30.599 "write": true, 00:09:30.599 "unmap": true, 00:09:30.599 "flush": false, 00:09:30.599 "reset": true, 00:09:30.599 "nvme_admin": false, 00:09:30.599 "nvme_io": false, 00:09:30.599 "nvme_io_md": false, 00:09:30.599 "write_zeroes": true, 00:09:30.599 "zcopy": false, 00:09:30.599 "get_zone_info": false, 00:09:30.599 "zone_management": false, 00:09:30.599 "zone_append": false, 00:09:30.599 "compare": false, 00:09:30.599 "compare_and_write": false, 
00:09:30.599 "abort": false, 00:09:30.599 "seek_hole": true, 00:09:30.599 "seek_data": true, 00:09:30.599 "copy": false, 00:09:30.599 "nvme_iov_md": false 00:09:30.599 }, 00:09:30.599 "driver_specific": { 00:09:30.599 "lvol": { 00:09:30.599 "lvol_store_uuid": "50aa5df0-9277-45e1-a459-f5ead00326dc", 00:09:30.599 "base_bdev": "aio_bdev", 00:09:30.599 "thin_provision": false, 00:09:30.599 "num_allocated_clusters": 38, 00:09:30.599 "snapshot": false, 00:09:30.599 "clone": false, 00:09:30.599 "esnap_clone": false 00:09:30.599 } 00:09:30.599 } 00:09:30.599 } 00:09:30.599 ] 00:09:30.599 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:30.599 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:30.599 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:30.857 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:31.116 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:31.116 07:33:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:31.374 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:31.374 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b137d94d-f3eb-42b1-8422-aa6e4c0660ab 00:09:31.632 07:33:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50aa5df0-9277-45e1-a459-f5ead00326dc 00:09:31.891 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.149 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:32.149 00:09:32.149 real 0m22.293s 00:09:32.150 user 0m56.216s 00:09:32.150 sys 0m4.747s 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.150 ************************************ 00:09:32.150 END TEST lvs_grow_dirty 00:09:32.150 ************************************ 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:32.150 07:33:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:32.150 nvmf_trace.0 00:09:32.150 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:32.150 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:32.150 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.150 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:32.150 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.150 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:32.150 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.150 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.150 rmmod nvme_tcp 00:09:32.150 rmmod nvme_fabrics 00:09:32.150 rmmod nvme_keyring 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2860566 ']' 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2860566 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2860566 ']' 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2860566 
00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2860566 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2860566' 00:09:32.462 killing process with pid 2860566 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2860566 00:09:32.462 07:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2860566 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.436 07:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.970 00:09:35.970 real 0m49.131s 00:09:35.970 user 1m23.846s 00:09:35.970 sys 0m8.813s 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:35.970 ************************************ 00:09:35.970 END TEST nvmf_lvs_grow 00:09:35.970 ************************************ 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.970 ************************************ 00:09:35.970 START TEST nvmf_bdev_io_wait 00:09:35.970 ************************************ 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:35.970 * Looking for test storage... 
00:09:35.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.970 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.970 --rc genhtml_branch_coverage=1 00:09:35.970 --rc genhtml_function_coverage=1 00:09:35.970 --rc genhtml_legend=1 00:09:35.970 --rc geninfo_all_blocks=1 00:09:35.970 --rc geninfo_unexecuted_blocks=1 00:09:35.970 00:09:35.970 ' 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.970 --rc genhtml_branch_coverage=1 00:09:35.970 --rc genhtml_function_coverage=1 00:09:35.970 --rc genhtml_legend=1 00:09:35.970 --rc geninfo_all_blocks=1 00:09:35.970 --rc geninfo_unexecuted_blocks=1 00:09:35.970 00:09:35.970 ' 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.970 --rc genhtml_branch_coverage=1 00:09:35.970 --rc genhtml_function_coverage=1 00:09:35.970 --rc genhtml_legend=1 00:09:35.970 --rc geninfo_all_blocks=1 00:09:35.970 --rc geninfo_unexecuted_blocks=1 00:09:35.970 00:09:35.970 ' 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.970 --rc genhtml_branch_coverage=1 00:09:35.970 --rc genhtml_function_coverage=1 00:09:35.970 --rc genhtml_legend=1 00:09:35.970 --rc geninfo_all_blocks=1 00:09:35.970 --rc geninfo_unexecuted_blocks=1 00:09:35.970 00:09:35.970 ' 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.970 07:33:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.970 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.971 07:33:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.874 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.874 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.874 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.874 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.874 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.874 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:37.875 07:33:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:37.875 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:37.875 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.875 07:33:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:37.875 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.875 
07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:37.875 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.875 07:33:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:37.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:09:37.875 00:09:37.875 --- 10.0.0.2 ping statistics --- 00:09:37.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.875 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:37.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:09:37.875 00:09:37.875 --- 10.0.0.1 ping statistics --- 00:09:37.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.875 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.875 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2863371 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2863371 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2863371 ']' 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.876 07:33:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.876 [2024-11-19 07:33:29.762447] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:37.876 [2024-11-19 07:33:29.762587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.134 [2024-11-19 07:33:29.913536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.392 [2024-11-19 07:33:30.080407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.392 [2024-11-19 07:33:30.080495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:38.392 [2024-11-19 07:33:30.080523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.392 [2024-11-19 07:33:30.080547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.392 [2024-11-19 07:33:30.080567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.392 [2024-11-19 07:33:30.083449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.392 [2024-11-19 07:33:30.083517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.392 [2024-11-19 07:33:30.083606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.392 [2024-11-19 07:33:30.083619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.958 07:33:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.958 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.218 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.218 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.218 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.218 07:33:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.218 [2024-11-19 07:33:30.996092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.218 Malloc0 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.218 
07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.218 [2024-11-19 07:33:31.103309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2863529 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2863530 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2863533 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.218 { 00:09:39.218 "params": { 00:09:39.218 "name": "Nvme$subsystem", 00:09:39.218 "trtype": "$TEST_TRANSPORT", 00:09:39.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.218 "adrfam": "ipv4", 00:09:39.218 "trsvcid": "$NVMF_PORT", 00:09:39.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.218 "hdgst": ${hdgst:-false}, 00:09:39.218 "ddgst": ${ddgst:-false} 00:09:39.218 }, 00:09:39.218 "method": "bdev_nvme_attach_controller" 00:09:39.218 } 00:09:39.218 EOF 00:09:39.218 )") 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.218 { 00:09:39.218 "params": { 00:09:39.218 
"name": "Nvme$subsystem", 00:09:39.218 "trtype": "$TEST_TRANSPORT", 00:09:39.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.218 "adrfam": "ipv4", 00:09:39.218 "trsvcid": "$NVMF_PORT", 00:09:39.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.218 "hdgst": ${hdgst:-false}, 00:09:39.218 "ddgst": ${ddgst:-false} 00:09:39.218 }, 00:09:39.218 "method": "bdev_nvme_attach_controller" 00:09:39.218 } 00:09:39.218 EOF 00:09:39.218 )") 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2863535 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:09:39.218 { 00:09:39.218 "params": { 00:09:39.218 "name": "Nvme$subsystem", 00:09:39.218 "trtype": "$TEST_TRANSPORT", 00:09:39.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.218 "adrfam": "ipv4", 00:09:39.218 "trsvcid": "$NVMF_PORT", 00:09:39.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.218 "hdgst": ${hdgst:-false}, 00:09:39.218 "ddgst": ${ddgst:-false} 00:09:39.218 }, 00:09:39.218 "method": "bdev_nvme_attach_controller" 00:09:39.218 } 00:09:39.218 EOF 00:09:39.218 )") 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:39.218 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.218 { 00:09:39.219 "params": { 00:09:39.219 "name": "Nvme$subsystem", 00:09:39.219 "trtype": "$TEST_TRANSPORT", 00:09:39.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.219 "adrfam": "ipv4", 00:09:39.219 "trsvcid": "$NVMF_PORT", 00:09:39.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.219 "hdgst": ${hdgst:-false}, 00:09:39.219 "ddgst": ${ddgst:-false} 00:09:39.219 }, 00:09:39.219 "method": "bdev_nvme_attach_controller" 00:09:39.219 } 00:09:39.219 EOF 00:09:39.219 )") 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2863529 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:39.219 
07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.219 "params": { 00:09:39.219 "name": "Nvme1", 00:09:39.219 "trtype": "tcp", 00:09:39.219 "traddr": "10.0.0.2", 00:09:39.219 "adrfam": "ipv4", 00:09:39.219 "trsvcid": "4420", 00:09:39.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.219 "hdgst": false, 00:09:39.219 "ddgst": false 00:09:39.219 }, 00:09:39.219 "method": "bdev_nvme_attach_controller" 00:09:39.219 }' 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.219 "params": { 00:09:39.219 "name": "Nvme1", 00:09:39.219 "trtype": "tcp", 00:09:39.219 "traddr": "10.0.0.2", 00:09:39.219 "adrfam": "ipv4", 00:09:39.219 "trsvcid": "4420", 00:09:39.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.219 "hdgst": false, 00:09:39.219 "ddgst": false 00:09:39.219 }, 00:09:39.219 "method": "bdev_nvme_attach_controller" 00:09:39.219 }' 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.219 "params": { 00:09:39.219 "name": "Nvme1", 00:09:39.219 "trtype": "tcp", 00:09:39.219 "traddr": "10.0.0.2", 
00:09:39.219 "adrfam": "ipv4", 00:09:39.219 "trsvcid": "4420", 00:09:39.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.219 "hdgst": false, 00:09:39.219 "ddgst": false 00:09:39.219 }, 00:09:39.219 "method": "bdev_nvme_attach_controller" 00:09:39.219 }' 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:39.219 07:33:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.219 "params": { 00:09:39.219 "name": "Nvme1", 00:09:39.219 "trtype": "tcp", 00:09:39.219 "traddr": "10.0.0.2", 00:09:39.219 "adrfam": "ipv4", 00:09:39.219 "trsvcid": "4420", 00:09:39.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.219 "hdgst": false, 00:09:39.219 "ddgst": false 00:09:39.219 }, 00:09:39.219 "method": "bdev_nvme_attach_controller" 00:09:39.219 }' 00:09:39.477 [2024-11-19 07:33:31.194753] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:39.477 [2024-11-19 07:33:31.194756] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:39.477 [2024-11-19 07:33:31.194753] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:39.477 [2024-11-19 07:33:31.194892] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:09:39.477 [2024-11-19 07:33:31.194892] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:09:39.477 [2024-11-19 07:33:31.194892] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:39.477 [2024-11-19 07:33:31.196730] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:09:39.477 [2024-11-19 07:33:31.196861] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:09:39.736 [2024-11-19 07:33:31.443965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.736 [2024-11-19 07:33:31.545430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.736 [2024-11-19 07:33:31.567310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:09:39.736 [2024-11-19 07:33:31.649231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.994 [2024-11-19 07:33:31.702182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:09:39.994 [2024-11-19 07:33:31.728491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.994 [2024-11-19 07:33:31.770023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on
core 4
00:09:39.994 [2024-11-19 07:33:31.844311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:09:40.252 Running I/O for 1 seconds...
00:09:40.252 Running I/O for 1 seconds...
00:09:40.510 Running I/O for 1 seconds...
00:09:40.510 Running I/O for 1 seconds...
00:09:41.446 5390.00 IOPS, 21.05 MiB/s [2024-11-19T06:33:33.376Z] 151016.00 IOPS, 589.91 MiB/s
00:09:41.447 Latency(us)
00:09:41.447 [2024-11-19T06:33:33.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:41.447 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:41.447 Nvme1n1 : 1.00 150704.06 588.69 0.00 0.00 844.99 388.36 2026.76
00:09:41.447 [2024-11-19T06:33:33.377Z] ===================================================================================================================
00:09:41.447 [2024-11-19T06:33:33.377Z] Total : 150704.06 588.69 0.00 0.00 844.99 388.36 2026.76
00:09:41.447
00:09:41.447 Latency(us)
00:09:41.447 [2024-11-19T06:33:33.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:41.447 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:41.447 Nvme1n1 : 1.03 5371.00 20.98 0.00 0.00 23568.39 6359.42 43496.49
00:09:41.447 [2024-11-19T06:33:33.377Z] ===================================================================================================================
00:09:41.447 [2024-11-19T06:33:33.377Z] Total : 5371.00 20.98 0.00 0.00 23568.39 6359.42 43496.49
00:09:41.447 5057.00 IOPS, 19.75 MiB/s
00:09:41.447 Latency(us)
00:09:41.447 [2024-11-19T06:33:33.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:41.447 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:41.447 Nvme1n1 : 1.01 5150.72 20.12 0.00 0.00 24731.29 7281.78 50486.99
00:09:41.447 [2024-11-19T06:33:33.377Z] ===================================================================================================================
00:09:41.447 [2024-11-19T06:33:33.377Z] Total : 5150.72 20.12 0.00 0.00 24731.29 7281.78 50486.99
00:09:41.706 7309.00 IOPS, 28.55 MiB/s
00:09:41.706 Latency(us)
00:09:41.706 [2024-11-19T06:33:33.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:41.706 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:41.706 Nvme1n1 : 1.01 7373.68 28.80 0.00 0.00 17268.57 4174.89 25243.50
00:09:41.706 [2024-11-19T06:33:33.636Z] ===================================================================================================================
00:09:41.706 [2024-11-19T06:33:33.636Z] Total : 7373.68 28.80 0.00 0.00 17268.57 4174.89 25243.50
00:09:42.272 07:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2863530
00:09:42.272 07:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2863533
00:09:42.272 07:33:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2863535
00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:09:42.272 07:33:34
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.272 rmmod nvme_tcp 00:09:42.272 rmmod nvme_fabrics 00:09:42.272 rmmod nvme_keyring 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2863371 ']' 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2863371 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2863371 ']' 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2863371 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2863371 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2863371' 00:09:42.272 killing process with pid 2863371 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2863371 00:09:42.272 07:33:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2863371 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.648 07:33:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.554 00:09:45.554 real 0m9.806s 00:09:45.554 user 0m28.538s 00:09:45.554 sys 0m4.027s 00:09:45.554 07:33:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.554 ************************************ 00:09:45.554 END TEST nvmf_bdev_io_wait 00:09:45.554 ************************************ 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.554 ************************************ 00:09:45.554 START TEST nvmf_queue_depth 00:09:45.554 ************************************ 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:45.554 * Looking for test storage... 
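The queue_depth trace that follows probes the installed lcov version through the lt/cmp_versions helpers in scripts/common.sh, which split each dotted version string and compare it field by field. A rough Python equivalent of that comparison; the function name and zero-padding detail are assumptions here, only the 1.15-versus-2 check comes from the trace:

```python
def version_lt(ver1: str, ver2: str) -> bool:
    """Return True if ver1 < ver2, comparing dotted version strings field by
    field, roughly like cmp_versions in scripts/common.sh."""
    p1 = [int(x) for x in ver1.split(".")]
    p2 = [int(x) for x in ver2.split(".")]
    # Pad the shorter list with zeros so "2" compares like "2.0".
    width = max(len(p1), len(p2))
    p1 += [0] * (width - len(p1))
    p2 += [0] * (width - len(p2))
    return p1 < p2

# The check exercised in the trace: lt 1.15 2
print(version_lt("1.15", "2"))  # True
```

When the comparison holds, the script selects the lcov 1.x-style `--rc lcov_branch_coverage=1` options seen in the subsequent trace lines.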
00:09:45.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:45.554 
07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.554 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:45.554 --rc genhtml_branch_coverage=1 00:09:45.554 --rc genhtml_function_coverage=1 00:09:45.554 --rc genhtml_legend=1 00:09:45.554 --rc geninfo_all_blocks=1 00:09:45.554 --rc geninfo_unexecuted_blocks=1 00:09:45.554 00:09:45.554 ' 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.554 --rc genhtml_branch_coverage=1 00:09:45.554 --rc genhtml_function_coverage=1 00:09:45.554 --rc genhtml_legend=1 00:09:45.554 --rc geninfo_all_blocks=1 00:09:45.554 --rc geninfo_unexecuted_blocks=1 00:09:45.554 00:09:45.554 ' 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.554 --rc genhtml_branch_coverage=1 00:09:45.554 --rc genhtml_function_coverage=1 00:09:45.554 --rc genhtml_legend=1 00:09:45.554 --rc geninfo_all_blocks=1 00:09:45.554 --rc geninfo_unexecuted_blocks=1 00:09:45.554 00:09:45.554 ' 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.554 --rc genhtml_branch_coverage=1 00:09:45.554 --rc genhtml_function_coverage=1 00:09:45.554 --rc genhtml_legend=1 00:09:45.554 --rc geninfo_all_blocks=1 00:09:45.554 --rc geninfo_unexecuted_blocks=1 00:09:45.554 00:09:45.554 ' 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.554 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.555 07:33:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.555 07:33:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.555 07:33:37 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.555 07:33:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:47.456 07:33:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:47.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:47.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:47.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:47.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.456 
07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:47.456 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:47.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:09:47.715 00:09:47.715 --- 10.0.0.2 ping statistics --- 00:09:47.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.715 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:09:47.715 00:09:47.715 --- 10.0.0.1 ping statistics --- 00:09:47.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.715 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2865908 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2865908 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2865908 ']' 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.715 07:33:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.715 [2024-11-19 07:33:39.558057] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:47.715 [2024-11-19 07:33:39.558217] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.974 [2024-11-19 07:33:39.748298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.974 [2024-11-19 07:33:39.888910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.974 [2024-11-19 07:33:39.889019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:47.974 [2024-11-19 07:33:39.889042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.974 [2024-11-19 07:33:39.889062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.974 [2024-11-19 07:33:39.889078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.974 [2024-11-19 07:33:39.890417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.909 [2024-11-19 07:33:40.642801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.909 Malloc0 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.909 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.910 [2024-11-19 07:33:40.765280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.910 07:33:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2866113 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2866113 /var/tmp/bdevperf.sock 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2866113 ']' 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:48.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.910 07:33:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.168 [2024-11-19 07:33:40.854753] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:49.168 [2024-11-19 07:33:40.854899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866113 ] 00:09:49.168 [2024-11-19 07:33:40.998774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.427 [2024-11-19 07:33:41.135623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.995 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.995 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:49.995 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:49.995 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.995 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.995 NVMe0n1 00:09:49.995 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.995 07:33:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.254 Running I/O for 10 seconds... 
00:09:52.126 5826.00 IOPS, 22.76 MiB/s [2024-11-19T06:33:45.432Z] 5810.50 IOPS, 22.70 MiB/s [2024-11-19T06:33:46.368Z] 5827.00 IOPS, 22.76 MiB/s [2024-11-19T06:33:47.302Z] 5888.00 IOPS, 23.00 MiB/s [2024-11-19T06:33:48.237Z] 5939.20 IOPS, 23.20 MiB/s [2024-11-19T06:33:49.171Z] 5970.33 IOPS, 23.32 MiB/s [2024-11-19T06:33:50.105Z] 5988.00 IOPS, 23.39 MiB/s [2024-11-19T06:33:51.481Z] 5984.62 IOPS, 23.38 MiB/s [2024-11-19T06:33:52.416Z] 5988.11 IOPS, 23.39 MiB/s [2024-11-19T06:33:52.416Z] 5991.40 IOPS, 23.40 MiB/s 00:10:00.486 Latency(us) 00:10:00.486 [2024-11-19T06:33:52.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.486 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:00.486 Verification LBA range: start 0x0 length 0x4000 00:10:00.486 NVMe0n1 : 10.15 5990.86 23.40 0.00 0.00 169109.26 27573.67 99032.18 00:10:00.486 [2024-11-19T06:33:52.416Z] =================================================================================================================== 00:10:00.486 [2024-11-19T06:33:52.416Z] Total : 5990.86 23.40 0.00 0.00 169109.26 27573.67 99032.18 00:10:00.486 { 00:10:00.486 "results": [ 00:10:00.486 { 00:10:00.486 "job": "NVMe0n1", 00:10:00.486 "core_mask": "0x1", 00:10:00.486 "workload": "verify", 00:10:00.486 "status": "finished", 00:10:00.486 "verify_range": { 00:10:00.486 "start": 0, 00:10:00.486 "length": 16384 00:10:00.486 }, 00:10:00.486 "queue_depth": 1024, 00:10:00.486 "io_size": 4096, 00:10:00.486 "runtime": 10.1508, 00:10:00.486 "iops": 5990.857863419632, 00:10:00.486 "mibps": 23.401788528982937, 00:10:00.487 "io_failed": 0, 00:10:00.487 "io_timeout": 0, 00:10:00.487 "avg_latency_us": 169109.2584631201, 00:10:00.487 "min_latency_us": 27573.665185185186, 00:10:00.487 "max_latency_us": 99032.17777777778 00:10:00.487 } 00:10:00.487 ], 00:10:00.487 "core_count": 1 00:10:00.487 } 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2866113 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2866113 ']' 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2866113 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2866113 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2866113' 00:10:00.487 killing process with pid 2866113 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2866113 00:10:00.487 Received shutdown signal, test time was about 10.000000 seconds 00:10:00.487 00:10:00.487 Latency(us) 00:10:00.487 [2024-11-19T06:33:52.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.487 [2024-11-19T06:33:52.417Z] =================================================================================================================== 00:10:00.487 [2024-11-19T06:33:52.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:00.487 07:33:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2866113 00:10:01.485 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:01.485 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:01.485 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.485 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:01.485 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.485 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:01.485 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.485 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.485 rmmod nvme_tcp 00:10:01.485 rmmod nvme_fabrics 00:10:01.485 rmmod nvme_keyring 00:10:01.485 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2865908 ']' 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2865908 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2865908 ']' 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2865908 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865908 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865908' 00:10:01.486 killing process with pid 2865908 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2865908 00:10:01.486 07:33:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2865908 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.862 07:33:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.768 07:33:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.768 00:10:04.768 real 0m19.411s 00:10:04.768 user 0m27.835s 00:10:04.768 sys 0m3.220s 00:10:04.768 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.768 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.768 ************************************ 00:10:04.768 END TEST nvmf_queue_depth 00:10:04.768 ************************************ 00:10:04.768 07:33:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:04.768 07:33:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.768 07:33:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.768 07:33:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.027 ************************************ 00:10:05.027 START TEST nvmf_target_multipath 00:10:05.027 ************************************ 00:10:05.027 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:05.027 * Looking for test storage... 
00:10:05.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.027 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:05.027 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:05.027 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:05.027 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:05.027 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.027 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.027 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.027 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:05.028 07:33:56 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:05.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.028 --rc genhtml_branch_coverage=1 00:10:05.028 --rc genhtml_function_coverage=1 00:10:05.028 --rc genhtml_legend=1 00:10:05.028 --rc geninfo_all_blocks=1 00:10:05.028 --rc geninfo_unexecuted_blocks=1 00:10:05.028 00:10:05.028 ' 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:05.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.028 --rc genhtml_branch_coverage=1 00:10:05.028 --rc genhtml_function_coverage=1 00:10:05.028 --rc genhtml_legend=1 00:10:05.028 --rc geninfo_all_blocks=1 00:10:05.028 --rc geninfo_unexecuted_blocks=1 00:10:05.028 00:10:05.028 ' 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:05.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.028 --rc genhtml_branch_coverage=1 00:10:05.028 --rc genhtml_function_coverage=1 00:10:05.028 --rc genhtml_legend=1 00:10:05.028 --rc geninfo_all_blocks=1 00:10:05.028 --rc geninfo_unexecuted_blocks=1 00:10:05.028 00:10:05.028 ' 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:05.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.028 --rc genhtml_branch_coverage=1 00:10:05.028 --rc genhtml_function_coverage=1 00:10:05.028 --rc genhtml_legend=1 00:10:05.028 --rc geninfo_all_blocks=1 00:10:05.028 --rc geninfo_unexecuted_blocks=1 00:10:05.028 00:10:05.028 ' 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:05.028 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.029 07:33:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:07.560 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:07.560 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:07.560 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.560 07:33:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:07.560 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:07.560 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.561 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.561 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:10:07.561 00:10:07.561 --- 10.0.0.2 ping statistics --- 00:10:07.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.561 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:10:07.561 07:33:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:10:07.561 00:10:07.561 --- 10.0.0.1 ping statistics --- 00:10:07.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.561 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:07.561 only one NIC for nvmf test 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:07.561 07:33:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.561 rmmod nvme_tcp 00:10:07.561 rmmod nvme_fabrics 00:10:07.561 rmmod nvme_keyring 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.561 07:33:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.464 00:10:09.464 real 0m4.460s 00:10:09.464 user 0m0.894s 00:10:09.464 sys 0m1.576s 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:09.464 ************************************ 00:10:09.464 END TEST nvmf_target_multipath 00:10:09.464 ************************************ 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.464 ************************************ 00:10:09.464 START TEST nvmf_zcopy 00:10:09.464 ************************************ 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:09.464 * Looking for test storage... 00:10:09.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.464 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.465 07:34:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.465 --rc genhtml_branch_coverage=1 00:10:09.465 --rc genhtml_function_coverage=1 00:10:09.465 --rc genhtml_legend=1 00:10:09.465 --rc geninfo_all_blocks=1 00:10:09.465 --rc geninfo_unexecuted_blocks=1 00:10:09.465 00:10:09.465 ' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.465 --rc genhtml_branch_coverage=1 00:10:09.465 --rc genhtml_function_coverage=1 00:10:09.465 --rc genhtml_legend=1 00:10:09.465 --rc geninfo_all_blocks=1 00:10:09.465 --rc geninfo_unexecuted_blocks=1 00:10:09.465 00:10:09.465 ' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.465 --rc genhtml_branch_coverage=1 00:10:09.465 --rc genhtml_function_coverage=1 00:10:09.465 --rc genhtml_legend=1 00:10:09.465 --rc geninfo_all_blocks=1 00:10:09.465 --rc geninfo_unexecuted_blocks=1 00:10:09.465 00:10:09.465 ' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.465 --rc genhtml_branch_coverage=1 00:10:09.465 --rc 
genhtml_function_coverage=1 00:10:09.465 --rc genhtml_legend=1 00:10:09.465 --rc geninfo_all_blocks=1 00:10:09.465 --rc geninfo_unexecuted_blocks=1 00:10:09.465 00:10:09.465 ' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.465 07:34:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:09.465 07:34:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:09.465 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.724 07:34:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.626 07:34:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:11.626 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:11.626 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:11.626 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:11.626 07:34:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:11.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.626 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.627 07:34:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:10:11.627 00:10:11.627 --- 10.0.0.2 ping statistics --- 00:10:11.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.627 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:10:11.627 00:10:11.627 --- 10.0.0.1 ping statistics --- 00:10:11.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.627 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.627 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2871653 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2871653 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2871653 ']' 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.885 07:34:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.885 [2024-11-19 07:34:03.674227] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:11.885 [2024-11-19 07:34:03.674368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.144 [2024-11-19 07:34:03.825905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.144 [2024-11-19 07:34:03.962453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.144 [2024-11-19 07:34:03.962558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:12.144 [2024-11-19 07:34:03.962583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.144 [2024-11-19 07:34:03.962610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.144 [2024-11-19 07:34:03.962630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.144 [2024-11-19 07:34:03.964320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.080 [2024-11-19 07:34:04.699944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.080 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.081 [2024-11-19 07:34:04.716167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.081 malloc0 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:13.081 { 00:10:13.081 "params": { 00:10:13.081 "name": "Nvme$subsystem", 00:10:13.081 "trtype": "$TEST_TRANSPORT", 00:10:13.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:13.081 "adrfam": "ipv4", 00:10:13.081 "trsvcid": "$NVMF_PORT", 00:10:13.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:13.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:13.081 "hdgst": ${hdgst:-false}, 00:10:13.081 "ddgst": ${ddgst:-false} 00:10:13.081 }, 00:10:13.081 "method": "bdev_nvme_attach_controller" 00:10:13.081 } 00:10:13.081 EOF 00:10:13.081 )") 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:13.081 07:34:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:13.081 "params": { 00:10:13.081 "name": "Nvme1", 00:10:13.081 "trtype": "tcp", 00:10:13.081 "traddr": "10.0.0.2", 00:10:13.081 "adrfam": "ipv4", 00:10:13.081 "trsvcid": "4420", 00:10:13.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:13.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:13.081 "hdgst": false, 00:10:13.081 "ddgst": false 00:10:13.081 }, 00:10:13.081 "method": "bdev_nvme_attach_controller" 00:10:13.081 }' 00:10:13.081 [2024-11-19 07:34:04.877456] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:13.081 [2024-11-19 07:34:04.877609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871827 ] 00:10:13.340 [2024-11-19 07:34:05.029562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.340 [2024-11-19 07:34:05.167397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.907 Running I/O for 10 seconds... 
00:10:15.780 4152.00 IOPS, 32.44 MiB/s [2024-11-19T06:34:09.087Z] 4211.00 IOPS, 32.90 MiB/s [2024-11-19T06:34:10.022Z] 4229.67 IOPS, 33.04 MiB/s [2024-11-19T06:34:10.959Z] 4224.00 IOPS, 33.00 MiB/s [2024-11-19T06:34:11.895Z] 4219.40 IOPS, 32.96 MiB/s [2024-11-19T06:34:12.831Z] 4221.67 IOPS, 32.98 MiB/s [2024-11-19T06:34:13.767Z] 4231.71 IOPS, 33.06 MiB/s [2024-11-19T06:34:14.703Z] 4231.25 IOPS, 33.06 MiB/s [2024-11-19T06:34:16.078Z] 4235.56 IOPS, 33.09 MiB/s [2024-11-19T06:34:16.078Z] 4234.00 IOPS, 33.08 MiB/s 00:10:24.148 Latency(us) 00:10:24.148 [2024-11-19T06:34:16.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.148 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:24.148 Verification LBA range: start 0x0 length 0x1000 00:10:24.148 Nvme1n1 : 10.02 4239.04 33.12 0.00 0.00 30106.21 761.55 39030.33 00:10:24.148 [2024-11-19T06:34:16.078Z] =================================================================================================================== 00:10:24.148 [2024-11-19T06:34:16.078Z] Total : 4239.04 33.12 0.00 0.00 30106.21 761.55 39030.33 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2873164 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.715 07:34:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.715 { 00:10:24.715 "params": { 00:10:24.715 "name": "Nvme$subsystem", 00:10:24.715 "trtype": "$TEST_TRANSPORT", 00:10:24.715 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.715 "adrfam": "ipv4", 00:10:24.715 "trsvcid": "$NVMF_PORT", 00:10:24.715 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.715 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.715 "hdgst": ${hdgst:-false}, 00:10:24.715 "ddgst": ${ddgst:-false} 00:10:24.715 }, 00:10:24.715 "method": "bdev_nvme_attach_controller" 00:10:24.715 } 00:10:24.715 EOF 00:10:24.715 )") 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:24.715 [2024-11-19 07:34:16.577868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.715 [2024-11-19 07:34:16.577926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:24.715 07:34:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.715 "params": { 00:10:24.715 "name": "Nvme1", 00:10:24.715 "trtype": "tcp", 00:10:24.715 "traddr": "10.0.0.2", 00:10:24.715 "adrfam": "ipv4", 00:10:24.715 "trsvcid": "4420", 00:10:24.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.715 "hdgst": false, 00:10:24.715 "ddgst": false 00:10:24.715 }, 00:10:24.716 "method": "bdev_nvme_attach_controller" 00:10:24.716 }' 00:10:24.716 [2024-11-19 07:34:16.585820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.716 [2024-11-19 07:34:16.585852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.716 [2024-11-19 07:34:16.593781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.716 [2024-11-19 07:34:16.593809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.716 [2024-11-19 07:34:16.601820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.716 [2024-11-19 07:34:16.601849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.716 [2024-11-19 07:34:16.609872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.716 [2024-11-19 07:34:16.609902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.716 [2024-11-19 07:34:16.617856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.716 [2024-11-19 07:34:16.617885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.716 [2024-11-19 07:34:16.625908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.716 [2024-11-19 
07:34:16.625939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.716 [2024-11-19 07:34:16.633904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.716 [2024-11-19 07:34:16.633932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.716 [2024-11-19 07:34:16.641908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.716 [2024-11-19 07:34:16.641936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.974 [2024-11-19 07:34:16.650001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.650041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.657883] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:24.975 [2024-11-19 07:34:16.657956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.658007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.658017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873164 ] 00:10:24.975 [2024-11-19 07:34:16.666020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.666056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.674066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.674101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 
07:34:16.682031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.682087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.690107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.690142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.698116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.698150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.706148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.706183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.714172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.714206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.722184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.722218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.730223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.730257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.738256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.738290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.746250] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.746284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.754296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.754330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.762313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.762346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.770319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.770352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.778363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.778397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.786371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.786404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.794410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.794443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.800492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.975 [2024-11-19 07:34:16.802454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.802488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:24.975 [2024-11-19 07:34:16.810442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.810476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.818543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.818598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.826592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.826660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.834506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.834538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.842551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.842585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.850549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.850582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.858610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.858644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.866614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.866648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.874612] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.874645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.882671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.882713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.890696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.890743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.898733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.898761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.975 [2024-11-19 07:34:16.906769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.975 [2024-11-19 07:34:16.906803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.914788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.914822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.922796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.922827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.930813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.930842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.938814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.938843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.939609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.234 [2024-11-19 07:34:16.946861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.946891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.954916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.954957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.962939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.963015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.970956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.971011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.978921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.978949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.986956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.987005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:16.995022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:16.995057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 
07:34:17.002998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.003032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.011046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.011081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.019071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.019105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.027089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.027124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.035208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.035267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.043199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.043260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.051256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.051317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.059270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.059333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.067186] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.067220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.075229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.075263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.083249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.083282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.091285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.091318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.099298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.099332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.107292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.107324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.115340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.115374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.123369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.123403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.131361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.131393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.139410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.139444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.147431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.147465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.155444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.155477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.234 [2024-11-19 07:34:17.163535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.234 [2024-11-19 07:34:17.163583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.171503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.171542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.179535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.179571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.187623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.187674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.195632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 
[2024-11-19 07:34:17.195698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.203706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.203774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.211626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.211662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.219626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.219659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.227665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.227709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.235670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.235712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.243742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.243771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.251758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.251787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.259761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.259789] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.267798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.267827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.275809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.275838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.283832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.283860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.291851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.291881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.299849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.299877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.307944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.308003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.315911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.315941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.324002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.324041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:25.494 [2024-11-19 07:34:17.332028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.332066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.340021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.340058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.348058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.348097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.356088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.356126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.364086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.364120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.372157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.372192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.380153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.380189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.388175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.388208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.396228] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.396262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.404252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.404290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.412247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.412283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.494 [2024-11-19 07:34:17.420298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.494 [2024-11-19 07:34:17.420333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.428302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.428353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.436352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.436390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.444378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.444417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.452374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.452409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.460420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.460455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.468478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.468513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.476451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.476485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.484505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.484540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.492512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.492547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.500583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.500623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 [2024-11-19 07:34:17.508596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.753 [2024-11-19 07:34:17.508634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.753 Running I/O for 5 seconds... 
00:10:25.753 [2024-11-19 07:34:17.516585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.753 [2024-11-19 07:34:17.516620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair repeats with advancing timestamps ...]
00:10:26.790 8450.00 IOPS, 66.02 MiB/s [2024-11-19T06:34:18.720Z]
[... the same error pair continues repeating ...]
00:10:27.826 8440.00 IOPS, 65.94 MiB/s [2024-11-19T06:34:19.756Z]
[... the same error pair continues repeating, last full pair at 2024-11-19 07:34:19.826572 ...]
00:10:28.084 [2024-11-19 07:34:19.841744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:28.084 [2024-11-19 07:34:19.841781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.084 [2024-11-19 07:34:19.856535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.084 [2024-11-19 07:34:19.856576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.084 [2024-11-19 07:34:19.871818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.084 [2024-11-19 07:34:19.871855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.084 [2024-11-19 07:34:19.886497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.084 [2024-11-19 07:34:19.886537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.084 [2024-11-19 07:34:19.902058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.084 [2024-11-19 07:34:19.902106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.084 [2024-11-19 07:34:19.916808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.084 [2024-11-19 07:34:19.916845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.084 [2024-11-19 07:34:19.931789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.085 [2024-11-19 07:34:19.931825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.085 [2024-11-19 07:34:19.946614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.085 [2024-11-19 07:34:19.946655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.085 [2024-11-19 07:34:19.961432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.085 
[2024-11-19 07:34:19.961472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.085 [2024-11-19 07:34:19.977042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.085 [2024-11-19 07:34:19.977082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.085 [2024-11-19 07:34:19.991531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.085 [2024-11-19 07:34:19.991572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.085 [2024-11-19 07:34:20.006795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.085 [2024-11-19 07:34:20.006847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.023820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.023885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.040465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.040532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.057507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.057564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.074004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.074060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.090223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.090265] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.106015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.106056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.121307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.121348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.136958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.137014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.152583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.152624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.167871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.167908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.182339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.182379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.197851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.197888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.213138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.213179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:28.343 [2024-11-19 07:34:20.227936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.227999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.243178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.243219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.258798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.258839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.343 [2024-11-19 07:34:20.270571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.343 [2024-11-19 07:34:20.270611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.285785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.285831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.301328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.301371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.315000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.315043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.330148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.330191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.345719] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.345776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.359118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.359159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.373884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.373922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.388254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.388291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.402168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.402206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.416430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.416467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.430853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.430891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.445191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.445228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.459730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.459767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.473937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.473974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.487920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.487958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.502457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.502495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 [2024-11-19 07:34:20.516253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.516290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.602 8405.33 IOPS, 65.67 MiB/s [2024-11-19T06:34:20.532Z] [2024-11-19 07:34:20.531085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.602 [2024-11-19 07:34:20.531121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.546000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.546065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.560493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.560531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.574869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.574922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.589770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.589807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.603591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.603630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.618096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.618134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.632163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.632199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.646064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.646102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.660381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.660423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.676082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.676123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.691380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 
[2024-11-19 07:34:20.691422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.706478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.706518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.721115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.721156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.735771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.735816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.751516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.751558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.766953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.767018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.861 [2024-11-19 07:34:20.782759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.861 [2024-11-19 07:34:20.782810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.798317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.798358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.813703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.813758] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.828516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.828557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.843374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.843414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.857891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.857928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.873323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.873364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.889364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.889405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.903529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.903569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.918290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.918331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.933070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.933111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:29.123 [2024-11-19 07:34:20.948707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.948760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.963944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.963998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.978811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.978848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:20.994370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:20.994412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:21.010129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:21.010170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:21.025221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.123 [2024-11-19 07:34:21.025272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.123 [2024-11-19 07:34:21.040083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.124 [2024-11-19 07:34:21.040124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.056003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.056044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.072434] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.072481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.088158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.088200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.102826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.102862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.118147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.118188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.130290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.130331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.144885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.144922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.159902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.159940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.175758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.175795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.189897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.189933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.205824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.205860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.221017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.221059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.236153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.236194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.251437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.251478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.267858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.267896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.282997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.283038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.298780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.298817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.314358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 
[2024-11-19 07:34:21.314407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.425 [2024-11-19 07:34:21.330483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.425 [2024-11-19 07:34:21.330531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.347710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.347765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.364426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.364469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.379158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.379200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.394329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.394370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.409127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.409168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.424422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.424462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.439388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.439429] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.454031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.454072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.468879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.468920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.484179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.484220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.499097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.499138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.514265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.514320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 8403.00 IOPS, 65.65 MiB/s [2024-11-19T06:34:21.639Z] [2024-11-19 07:34:21.529659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.709 [2024-11-19 07:34:21.529708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.709 [2024-11-19 07:34:21.544420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.710 [2024-11-19 07:34:21.544460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.710 [2024-11-19 07:34:21.559223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.710 [2024-11-19 07:34:21.559264] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.710 [2024-11-19 07:34:21.574775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.710 [2024-11-19 07:34:21.574813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.710 [2024-11-19 07:34:21.589646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.710 [2024-11-19 07:34:21.589687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.710 [2024-11-19 07:34:21.604492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.710 [2024-11-19 07:34:21.604532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.710 [2024-11-19 07:34:21.619566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.710 [2024-11-19 07:34:21.619606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.710 [2024-11-19 07:34:21.634211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.710 [2024-11-19 07:34:21.634253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.968 [2024-11-19 07:34:21.650265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.968 [2024-11-19 07:34:21.650306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.968 [2024-11-19 07:34:21.663549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.968 [2024-11-19 07:34:21.663589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.968 [2024-11-19 07:34:21.678704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.968 [2024-11-19 07:34:21.678758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:10:29.968 [2024-11-19 07:34:21.693397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:29.968 [2024-11-19 07:34:21.693437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with advancing timestamps from 07:34:21.708 through 07:34:22.517 (log time 00:10:29.969 - 00:10:30.746) ...]
00:10:30.746 8399.40 IOPS, 65.62 MiB/s [2024-11-19T06:34:22.676Z]
00:10:30.746 [2024-11-19 07:34:22.533084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.746 [2024-11-19 07:34:22.533125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.746 [2024-11-19 07:34:22.542199]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.746 [2024-11-19 07:34:22.542238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.746 
00:10:30.746                                            Latency(us)
00:10:30.746 [2024-11-19T06:34:22.676Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s    TO/s   Average      min      max
00:10:30.746 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:30.746 	 Nvme1n1             :       5.02  8401.83   65.64    0.00    0.00  15208.99  6796.33 28350.39
00:10:30.746 [2024-11-19T06:34:22.676Z] ===================================================================================================================
00:10:30.746 [2024-11-19T06:34:22.676Z] Total               :              8401.83   65.64    0.00    0.00  15208.99  6796.33 28350.39
00:10:30.746 [2024-11-19 07:34:22.548922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.746 [2024-11-19 07:34:22.548953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.746 [2024-11-19 07:34:22.556986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.746 [2024-11-19 07:34:22.557024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.746 [2024-11-19 07:34:22.565003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.746 [2024-11-19 07:34:22.565040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.746 [2024-11-19 07:34:22.572997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.746 [2024-11-19 07:34:22.573031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.746 [2024-11-19 07:34:22.581081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.746 [2024-11-19 07:34:22.581117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:30.746 [2024-11-19 07:34:22.589055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:30.746 [2024-11-19 07:34:22.589090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with advancing timestamps from 07:34:22.597 through 07:34:23.307 (log time 00:10:30.746 - 00:10:31.525) ...]
00:10:31.525 [2024-11-19 07:34:23.307326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.525 [2024-11-19 07:34:23.307385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.525 [2024-11-19 07:34:23.315324] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.525 [2024-11-19 07:34:23.315375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.525 [2024-11-19 07:34:23.323246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.525 [2024-11-19 07:34:23.323279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.525 [2024-11-19 07:34:23.331300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.525 [2024-11-19 07:34:23.331334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.525 [2024-11-19 07:34:23.339311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.525 [2024-11-19 07:34:23.339345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.525 [2024-11-19 07:34:23.347314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.525 [2024-11-19 07:34:23.347347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.525 [2024-11-19 07:34:23.355377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.525 [2024-11-19 07:34:23.355412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.525 [2024-11-19 07:34:23.363368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.525 [2024-11-19 07:34:23.363401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 [2024-11-19 07:34:23.371408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.526 [2024-11-19 07:34:23.371441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 [2024-11-19 07:34:23.379425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:31.526 [2024-11-19 07:34:23.379459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 [2024-11-19 07:34:23.387420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.526 [2024-11-19 07:34:23.387453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 [2024-11-19 07:34:23.395468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.526 [2024-11-19 07:34:23.395502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 [2024-11-19 07:34:23.403534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.526 [2024-11-19 07:34:23.403570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 [2024-11-19 07:34:23.411493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.526 [2024-11-19 07:34:23.411528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 [2024-11-19 07:34:23.419536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.526 [2024-11-19 07:34:23.419570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 [2024-11-19 07:34:23.427538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.526 [2024-11-19 07:34:23.427571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 [2024-11-19 07:34:23.435586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.526 [2024-11-19 07:34:23.435619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2873164) - No such process 00:10:31.526 
07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2873164 00:10:31.526 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.526 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.526 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.526 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.526 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:31.526 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.526 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.785 delay0 00:10:31.785 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.785 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:31.785 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.785 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.785 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.785 07:34:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:31.785 [2024-11-19 07:34:23.624836] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:10:38.348 Initializing NVMe Controllers 00:10:38.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:38.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:38.348 Initialization complete. Launching workers. 00:10:38.348 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 76 00:10:38.348 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 33 00:10:38.348 success 186, unsuccessful 177, failed 0 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.348 rmmod nvme_tcp 00:10:38.348 rmmod nvme_fabrics 00:10:38.348 rmmod nvme_keyring 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2871653 ']' 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 
2871653 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2871653 ']' 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2871653 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2871653 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2871653' 00:10:38.348 killing process with pid 2871653 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2871653 00:10:38.348 07:34:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2871653 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:39.283 07:34:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.283 07:34:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:41.821 00:10:41.821 real 0m31.945s 00:10:41.821 user 0m46.458s 00:10:41.821 sys 0m8.508s 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.821 ************************************ 00:10:41.821 END TEST nvmf_zcopy 00:10:41.821 ************************************ 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.821 ************************************ 00:10:41.821 START TEST nvmf_nmic 00:10:41.821 ************************************ 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:41.821 * Looking for test 
storage... 00:10:41.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.821 
07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:41.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.821 --rc genhtml_branch_coverage=1 00:10:41.821 --rc genhtml_function_coverage=1 00:10:41.821 --rc genhtml_legend=1 00:10:41.821 --rc geninfo_all_blocks=1 00:10:41.821 --rc 
geninfo_unexecuted_blocks=1 00:10:41.821 00:10:41.821 ' 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:41.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.821 --rc genhtml_branch_coverage=1 00:10:41.821 --rc genhtml_function_coverage=1 00:10:41.821 --rc genhtml_legend=1 00:10:41.821 --rc geninfo_all_blocks=1 00:10:41.821 --rc geninfo_unexecuted_blocks=1 00:10:41.821 00:10:41.821 ' 00:10:41.821 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:41.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.822 --rc genhtml_branch_coverage=1 00:10:41.822 --rc genhtml_function_coverage=1 00:10:41.822 --rc genhtml_legend=1 00:10:41.822 --rc geninfo_all_blocks=1 00:10:41.822 --rc geninfo_unexecuted_blocks=1 00:10:41.822 00:10:41.822 ' 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:41.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.822 --rc genhtml_branch_coverage=1 00:10:41.822 --rc genhtml_function_coverage=1 00:10:41.822 --rc genhtml_legend=1 00:10:41.822 --rc geninfo_all_blocks=1 00:10:41.822 --rc geninfo_unexecuted_blocks=1 00:10:41.822 00:10:41.822 ' 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.822 
07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.822 
07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.822 07:34:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.722 07:34:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:43.722 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:43.722 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:43.722 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:43.722 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:43.722 
07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.722 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
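The `nvmf_tcp_init` commands that follow move the target-side port (`cvl_0_0`) into a private network namespace so initiator and target traffic crosses a real TCP path rather than the loopback device. A dry-run sketch of that topology, echoing the commands instead of executing them (the real sequence needs root and the physical `cvl_*` interfaces):

```shell
# Dry-run sketch of the namespace topology the harness builds: target port
# in its own netns with 10.0.0.2, initiator port left in the root netns
# with 10.0.0.1. Echoes the commands rather than running them.
setup_ns() {
    local ns=$1 target_if=$2 initiator_if=$3
    echo "ip netns add $ns"
    echo "ip link set $target_if netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $initiator_if"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
    echo "ip link set $initiator_if up"
    echo "ip netns exec $ns ip link set $target_if up"
}
setup_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The bidirectional pings in the log are the sanity check that this topology carries packets both ways before the target is started.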
00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:10:43.723 00:10:43.723 --- 10.0.0.2 ping statistics --- 00:10:43.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.723 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:10:43.723 00:10:43.723 --- 10.0.0.1 ping statistics --- 00:10:43.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.723 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2876826 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
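With `nvmf_tgt` launched inside the namespace, `waitforlisten` blocks until the app exposes its JSON-RPC socket. In spirit that is a bounded poll; a hedged sketch (socket path, retry budget, and delay here are illustrative, not the harness's exact values):

```shell
# Poll for a UNIX-domain socket (e.g. /var/tmp/spdk.sock) with a retry
# budget; returns nonzero if it never appears. Illustrative only.
wait_for_sock() {
    local sock=$1 retries=${2:-100} delay=${3:-0.1} i=0
    while [ "$i" -lt "$retries" ]; do
        if [ -S "$sock" ]; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}
```

The harness additionally checks that the recorded PID is still alive while it waits, so a crashed target fails fast instead of burning the whole retry budget.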
00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2876826 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2876826 ']' 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.723 07:34:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.981 [2024-11-19 07:34:35.685954] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:43.981 [2024-11-19 07:34:35.686097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.981 [2024-11-19 07:34:35.837676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.240 [2024-11-19 07:34:35.985001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.240 [2024-11-19 07:34:35.985098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:44.240 [2024-11-19 07:34:35.985124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.240 [2024-11-19 07:34:35.985148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.240 [2024-11-19 07:34:35.985167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.240 [2024-11-19 07:34:35.988062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.240 [2024-11-19 07:34:35.988121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.240 [2024-11-19 07:34:35.988174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.240 [2024-11-19 07:34:35.988181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.806 [2024-11-19 07:34:36.669110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.806 
07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.806 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.065 Malloc0 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.065 [2024-11-19 07:34:36.792811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:45.065 test case1: single bdev can't be used in multiple subsystems 00:10:45.065 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.066 [2024-11-19 07:34:36.816539] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:45.066 [2024-11-19 
07:34:36.816578] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:45.066 [2024-11-19 07:34:36.816620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.066 request: 00:10:45.066 { 00:10:45.066 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:45.066 "namespace": { 00:10:45.066 "bdev_name": "Malloc0", 00:10:45.066 "no_auto_visible": false 00:10:45.066 }, 00:10:45.066 "method": "nvmf_subsystem_add_ns", 00:10:45.066 "req_id": 1 00:10:45.066 } 00:10:45.066 Got JSON-RPC error response 00:10:45.066 response: 00:10:45.066 { 00:10:45.066 "code": -32602, 00:10:45.066 "message": "Invalid parameters" 00:10:45.066 } 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:45.066 Adding namespace failed - expected result. 
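Test case1 deliberately drives an RPC into an error: `Malloc0` is already claimed `exclusive_write` by `cnode1`, so adding it to `cnode2` must fail with the `-32602` JSON-RPC error shown above. The script's pass/fail inversion boils down to this pattern (here `false` stands in for the failing `nvmf_subsystem_add_ns` call):

```shell
# Expected-failure check: record the status of a command that SHOULD fail,
# and treat success as the test error. `false` is a stand-in for
# `rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0`.
nmic_status=0
false || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
    echo 'Adding namespace passed - failure was expected.'
    exit 1
fi
echo ' Adding namespace failed - expected result.'
```

The `|| nmic_status=1` form matters: it captures the failure without tripping `set -e`-style error handling in the surrounding script.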
00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:45.066 test case2: host connect to nvmf target in multiple paths 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:45.066 [2024-11-19 07:34:36.824707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.066 07:34:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.633 07:34:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:46.568 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.568 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:46.568 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.568 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:46.568 07:34:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
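After connecting over both paths (ports 4420 and 4421), `waitforserial` polls the block layer until a namespace carrying the subsystem's serial (`SPDKISFASTANDAWESOME`) is visible. A hedged sketch of that loop; the retry and delay parameters are added here for illustration, the harness hardcodes its own:

```shell
# Poll `lsblk -l -o NAME,SERIAL` until at least `want` block devices carry
# the given serial. Retry count and delay are illustrative parameters.
waitforserial() {
    local serial=$1 want=${2:-1} retries=${3:-15} delay=${4:-2} i=0 n
    while [ "$i" -lt "$retries" ]; do
        n=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial" || true)
        if [ "$n" -ge "$want" ]; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}
```

The `|| true` keeps `grep -c`'s nonzero exit (when it counts zero matches) from aborting a script running under `set -e`.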
00:10:48.472 07:34:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:48.472 07:34:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:48.472 07:34:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.472 07:34:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:48.472 07:34:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.472 07:34:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:48.472 07:34:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:48.472 [global] 00:10:48.472 thread=1 00:10:48.472 invalidate=1 00:10:48.472 rw=write 00:10:48.472 time_based=1 00:10:48.472 runtime=1 00:10:48.472 ioengine=libaio 00:10:48.472 direct=1 00:10:48.472 bs=4096 00:10:48.472 iodepth=1 00:10:48.472 norandommap=0 00:10:48.472 numjobs=1 00:10:48.472 00:10:48.472 verify_dump=1 00:10:48.472 verify_backlog=512 00:10:48.472 verify_state_save=0 00:10:48.472 do_verify=1 00:10:48.472 verify=crc32c-intel 00:10:48.472 [job0] 00:10:48.472 filename=/dev/nvme0n1 00:10:48.472 Could not set queue depth (nvme0n1) 00:10:48.731 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.731 fio-3.35 00:10:48.731 Starting 1 thread 00:10:50.105 00:10:50.105 job0: (groupid=0, jobs=1): err= 0: pid=2877472: Tue Nov 19 07:34:41 2024 00:10:50.106 read: IOPS=44, BW=176KiB/s (180kB/s)(180KiB/1022msec) 00:10:50.106 slat (nsec): min=10167, max=73439, avg=26808.67, stdev=10353.38 00:10:50.106 clat (usec): min=278, max=41212, avg=20274.49, stdev=20484.84 00:10:50.106 lat (usec): min=300, max=41282, 
avg=20301.30, stdev=20482.74 00:10:50.106 clat percentiles (usec): 00:10:50.106 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 363], 00:10:50.106 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 1287], 60.00th=[41157], 00:10:50.106 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:50.106 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:50.106 | 99.99th=[41157] 00:10:50.106 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:10:50.106 slat (nsec): min=6779, max=54086, avg=12149.21, stdev=6683.13 00:10:50.106 clat (usec): min=157, max=409, avg=194.24, stdev=18.82 00:10:50.106 lat (usec): min=164, max=439, avg=206.39, stdev=23.02 00:10:50.106 clat percentiles (usec): 00:10:50.106 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 182], 00:10:50.106 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:10:50.106 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 221], 00:10:50.106 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 408], 99.95th=[ 408], 00:10:50.106 | 99.99th=[ 408] 00:10:50.106 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:50.106 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:50.106 lat (usec) : 250=91.56%, 500=2.69%, 750=1.62% 00:10:50.106 lat (msec) : 2=0.18%, 50=3.95% 00:10:50.106 cpu : usr=0.59%, sys=0.88%, ctx=557, majf=0, minf=1 00:10:50.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.106 issued rwts: total=45,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.106 00:10:50.106 Run status group 0 (all jobs): 00:10:50.106 READ: bw=176KiB/s (180kB/s), 176KiB/s-176KiB/s (180kB/s-180kB/s), io=180KiB 
(184kB), run=1022-1022msec 00:10:50.106 WRITE: bw=2004KiB/s (2052kB/s), 2004KiB/s-2004KiB/s (2052kB/s-2052kB/s), io=2048KiB (2097kB), run=1022-1022msec 00:10:50.106 00:10:50.106 Disk stats (read/write): 00:10:50.106 nvme0n1: ios=82/512, merge=0/0, ticks=810/95, in_queue=905, util=91.68% 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.106 rmmod nvme_tcp 00:10:50.106 rmmod nvme_fabrics 00:10:50.106 rmmod nvme_keyring 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2876826 ']' 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2876826 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2876826 ']' 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2876826 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2876826 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2876826' 00:10:50.106 killing process with pid 2876826 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2876826 00:10:50.106 07:34:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2876826 00:10:51.483 07:34:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.483 07:34:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.391 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.391 00:10:53.391 real 0m12.094s 00:10:53.391 user 0m29.037s 00:10:53.391 sys 0m2.611s 00:10:53.391 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.391 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.391 ************************************ 00:10:53.391 END TEST nvmf_nmic 00:10:53.391 ************************************ 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:53.650 ************************************ 00:10:53.650 START TEST nvmf_fio_target 00:10:53.650 ************************************ 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:53.650 * Looking for test storage... 00:10:53.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:53.650 07:34:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:53.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.650 --rc genhtml_branch_coverage=1 00:10:53.650 --rc genhtml_function_coverage=1 00:10:53.650 --rc genhtml_legend=1 00:10:53.650 --rc geninfo_all_blocks=1 00:10:53.650 --rc geninfo_unexecuted_blocks=1 00:10:53.650 00:10:53.650 ' 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:53.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.650 --rc genhtml_branch_coverage=1 00:10:53.650 --rc genhtml_function_coverage=1 00:10:53.650 --rc genhtml_legend=1 00:10:53.650 --rc geninfo_all_blocks=1 00:10:53.650 --rc geninfo_unexecuted_blocks=1 00:10:53.650 00:10:53.650 ' 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:53.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.650 --rc genhtml_branch_coverage=1 00:10:53.650 --rc genhtml_function_coverage=1 00:10:53.650 --rc genhtml_legend=1 00:10:53.650 --rc geninfo_all_blocks=1 00:10:53.650 --rc geninfo_unexecuted_blocks=1 00:10:53.650 00:10:53.650 ' 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:53.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.650 --rc genhtml_branch_coverage=1 00:10:53.650 --rc genhtml_function_coverage=1 00:10:53.650 --rc genhtml_legend=1 00:10:53.650 --rc geninfo_all_blocks=1 00:10:53.650 --rc geninfo_unexecuted_blocks=1 00:10:53.650 00:10:53.650 ' 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.650 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.651 07:34:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.184 07:34:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.184 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:56.185 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:56.185 07:34:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:56.185 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:56.185 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:56.185 Found net devices under 0000:0a:00.1: cvl_0_1 
00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:10:56.185 00:10:56.185 --- 10.0.0.2 ping statistics --- 00:10:56.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.185 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:10:56.185 00:10:56.185 --- 10.0.0.1 ping statistics --- 00:10:56.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.185 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2879695 00:10:56.185 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.186 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2879695 00:10:56.186 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2879695 ']' 00:10:56.186 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.186 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.186 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.186 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.186 07:34:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.186 [2024-11-19 07:34:47.775507] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:56.186 [2024-11-19 07:34:47.775651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.186 [2024-11-19 07:34:47.928272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.186 [2024-11-19 07:34:48.074965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.186 [2024-11-19 07:34:48.075048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.186 [2024-11-19 07:34:48.075074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.186 [2024-11-19 07:34:48.075099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.186 [2024-11-19 07:34:48.075119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:56.186 [2024-11-19 07:34:48.077938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.186 [2024-11-19 07:34:48.077997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.186 [2024-11-19 07:34:48.078049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.186 [2024-11-19 07:34:48.078055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.121 07:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.121 07:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:57.121 07:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.121 07:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.121 07:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.121 07:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.121 07:34:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:57.121 [2024-11-19 07:34:48.980440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.121 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.688 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:57.688 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.946 07:34:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:57.946 07:34:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.205 07:34:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:58.205 07:34:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.771 07:34:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:58.771 07:34:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:58.771 07:34:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.337 07:34:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:59.337 07:34:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.596 07:34:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:59.596 07:34:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.854 07:34:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:59.854 07:34:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:00.112 07:34:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.369 07:34:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.369 07:34:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.626 07:34:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.626 07:34:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.884 07:34:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.142 [2024-11-19 07:34:53.035792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.142 07:34:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:01.430 07:34:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:01.729 07:34:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:02.664 07:34:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:02.664 07:34:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.664 07:34:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.664 07:34:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:02.664 07:34:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:02.664 07:34:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.564 07:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.564 07:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.564 07:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.564 07:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:04.564 07:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.564 07:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:04.564 07:34:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:04.564 [global] 00:11:04.564 thread=1 00:11:04.564 invalidate=1 00:11:04.564 rw=write 00:11:04.564 time_based=1 00:11:04.564 runtime=1 00:11:04.564 ioengine=libaio 00:11:04.564 direct=1 00:11:04.564 bs=4096 00:11:04.564 iodepth=1 00:11:04.564 norandommap=0 00:11:04.564 numjobs=1 00:11:04.564 00:11:04.564 
verify_dump=1 00:11:04.564 verify_backlog=512 00:11:04.564 verify_state_save=0 00:11:04.564 do_verify=1 00:11:04.564 verify=crc32c-intel 00:11:04.564 [job0] 00:11:04.564 filename=/dev/nvme0n1 00:11:04.564 [job1] 00:11:04.564 filename=/dev/nvme0n2 00:11:04.564 [job2] 00:11:04.564 filename=/dev/nvme0n3 00:11:04.564 [job3] 00:11:04.564 filename=/dev/nvme0n4 00:11:04.564 Could not set queue depth (nvme0n1) 00:11:04.564 Could not set queue depth (nvme0n2) 00:11:04.564 Could not set queue depth (nvme0n3) 00:11:04.564 Could not set queue depth (nvme0n4) 00:11:04.823 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.823 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.823 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.823 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.823 fio-3.35 00:11:04.823 Starting 4 threads 00:11:06.198 00:11:06.198 job0: (groupid=0, jobs=1): err= 0: pid=2880901: Tue Nov 19 07:34:57 2024 00:11:06.198 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:11:06.198 slat (nsec): min=6500, max=33833, avg=24070.23, stdev=9261.17 00:11:06.198 clat (usec): min=40789, max=41007, avg=40957.78, stdev=45.89 00:11:06.198 lat (usec): min=40795, max=41022, avg=40981.85, stdev=46.27 00:11:06.198 clat percentiles (usec): 00:11:06.198 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:06.198 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:06.198 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:06.198 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:06.198 | 99.99th=[41157] 00:11:06.198 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:11:06.198 slat (nsec): min=5977, max=54879, 
avg=9633.27, stdev=5420.97 00:11:06.198 clat (usec): min=190, max=349, avg=224.36, stdev=14.69 00:11:06.198 lat (usec): min=197, max=404, avg=234.00, stdev=16.62 00:11:06.198 clat percentiles (usec): 00:11:06.198 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:11:06.198 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:11:06.198 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 249], 00:11:06.198 | 99.00th=[ 265], 99.50th=[ 265], 99.90th=[ 351], 99.95th=[ 351], 00:11:06.198 | 99.99th=[ 351] 00:11:06.198 bw ( KiB/s): min= 4096, max= 4096, per=24.20%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.198 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.198 lat (usec) : 250=91.76%, 500=4.12% 00:11:06.198 lat (msec) : 50=4.12% 00:11:06.199 cpu : usr=0.39%, sys=0.39%, ctx=534, majf=0, minf=1 00:11:06.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.199 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.199 job1: (groupid=0, jobs=1): err= 0: pid=2880902: Tue Nov 19 07:34:57 2024 00:11:06.199 read: IOPS=1292, BW=5171KiB/s (5295kB/s)(5176KiB/1001msec) 00:11:06.199 slat (nsec): min=6018, max=47076, avg=12466.17, stdev=5179.92 00:11:06.199 clat (usec): min=227, max=40981, avg=465.64, stdev=2762.78 00:11:06.199 lat (usec): min=236, max=41013, avg=478.10, stdev=2764.02 00:11:06.199 clat percentiles (usec): 00:11:06.199 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 262], 00:11:06.199 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 273], 60.00th=[ 277], 00:11:06.199 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 310], 00:11:06.199 | 99.00th=[ 429], 99.50th=[ 537], 99.90th=[41157], 99.95th=[41157], 
00:11:06.199 | 99.99th=[41157] 00:11:06.199 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:06.199 slat (nsec): min=7277, max=56326, avg=16166.28, stdev=6735.78 00:11:06.199 clat (usec): min=170, max=3532, avg=224.24, stdev=93.20 00:11:06.199 lat (usec): min=177, max=3542, avg=240.41, stdev=93.60 00:11:06.199 clat percentiles (usec): 00:11:06.199 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:11:06.199 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:11:06.199 | 70.00th=[ 225], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 273], 00:11:06.199 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 519], 99.95th=[ 3523], 00:11:06.199 | 99.99th=[ 3523] 00:11:06.199 bw ( KiB/s): min= 8192, max= 8192, per=48.41%, avg=8192.00, stdev= 0.00, samples=1 00:11:06.199 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:06.199 lat (usec) : 250=49.54%, 500=50.11%, 750=0.11% 00:11:06.199 lat (msec) : 4=0.04%, 50=0.21% 00:11:06.199 cpu : usr=2.50%, sys=6.10%, ctx=2830, majf=0, minf=1 00:11:06.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.199 issued rwts: total=1294,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.199 job2: (groupid=0, jobs=1): err= 0: pid=2880904: Tue Nov 19 07:34:57 2024 00:11:06.199 read: IOPS=40, BW=162KiB/s (166kB/s)(164KiB/1014msec) 00:11:06.199 slat (nsec): min=7500, max=36489, avg=18648.00, stdev=10974.82 00:11:06.199 clat (usec): min=307, max=41775, avg=20624.33, stdev=20173.64 00:11:06.199 lat (usec): min=318, max=41791, avg=20642.98, stdev=20181.84 00:11:06.199 clat percentiles (usec): 00:11:06.199 | 1.00th=[ 310], 5.00th=[ 396], 10.00th=[ 412], 20.00th=[ 420], 00:11:06.199 | 30.00th=[ 424], 40.00th=[ 
433], 50.00th=[ 9503], 60.00th=[41157], 00:11:06.199 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:06.199 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:06.199 | 99.99th=[41681] 00:11:06.199 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:11:06.199 slat (usec): min=6, max=22746, avg=58.61, stdev=1004.67 00:11:06.199 clat (usec): min=199, max=552, avg=264.31, stdev=51.79 00:11:06.199 lat (usec): min=212, max=23234, avg=322.92, stdev=1015.81 00:11:06.199 clat percentiles (usec): 00:11:06.199 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:11:06.199 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:11:06.199 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 359], 95.00th=[ 396], 00:11:06.199 | 99.00th=[ 424], 99.50th=[ 490], 99.90th=[ 553], 99.95th=[ 553], 00:11:06.199 | 99.99th=[ 553] 00:11:06.199 bw ( KiB/s): min= 4096, max= 4096, per=24.20%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.199 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.199 lat (usec) : 250=46.47%, 500=49.01%, 750=0.54% 00:11:06.199 lat (msec) : 10=0.36%, 50=3.62% 00:11:06.199 cpu : usr=0.39%, sys=0.89%, ctx=555, majf=0, minf=1 00:11:06.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.199 issued rwts: total=41,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.199 job3: (groupid=0, jobs=1): err= 0: pid=2880905: Tue Nov 19 07:34:57 2024 00:11:06.199 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:06.199 slat (nsec): min=6498, max=59643, avg=16273.67, stdev=6527.89 00:11:06.199 clat (usec): min=237, max=631, avg=338.66, stdev=67.52 00:11:06.199 lat (usec): min=245, 
max=651, avg=354.93, stdev=71.03 00:11:06.199 clat percentiles (usec): 00:11:06.199 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 289], 00:11:06.199 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 343], 00:11:06.199 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 408], 95.00th=[ 490], 00:11:06.199 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 619], 99.95th=[ 635], 00:11:06.199 | 99.99th=[ 635] 00:11:06.199 write: IOPS=1766, BW=7065KiB/s (7234kB/s)(7072KiB/1001msec); 0 zone resets 00:11:06.199 slat (nsec): min=8059, max=54314, avg=16151.29, stdev=6870.64 00:11:06.199 clat (usec): min=184, max=512, avg=232.08, stdev=31.18 00:11:06.199 lat (usec): min=194, max=521, avg=248.23, stdev=33.38 00:11:06.199 clat percentiles (usec): 00:11:06.199 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:11:06.199 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:11:06.199 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:11:06.199 | 99.00th=[ 314], 99.50th=[ 437], 99.90th=[ 510], 99.95th=[ 515], 00:11:06.199 | 99.99th=[ 515] 00:11:06.199 bw ( KiB/s): min= 8192, max= 8192, per=48.41%, avg=8192.00, stdev= 0.00, samples=1 00:11:06.199 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:06.199 lat (usec) : 250=42.37%, 500=55.60%, 750=2.03% 00:11:06.199 cpu : usr=3.60%, sys=7.60%, ctx=3305, majf=0, minf=1 00:11:06.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.199 issued rwts: total=1536,1768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.199 00:11:06.199 Run status group 0 (all jobs): 00:11:06.199 READ: bw=11.0MiB/s (11.6MB/s), 86.0KiB/s-6138KiB/s (88.1kB/s-6285kB/s), io=11.3MiB (11.8MB), run=1001-1023msec 00:11:06.199 
WRITE: bw=16.5MiB/s (17.3MB/s), 2002KiB/s-7065KiB/s (2050kB/s-7234kB/s), io=16.9MiB (17.7MB), run=1001-1023msec 00:11:06.199 00:11:06.199 Disk stats (read/write): 00:11:06.199 nvme0n1: ios=67/512, merge=0/0, ticks=727/109, in_queue=836, util=86.97% 00:11:06.199 nvme0n2: ios=1043/1536, merge=0/0, ticks=448/330, in_queue=778, util=86.88% 00:11:06.199 nvme0n3: ios=51/512, merge=0/0, ticks=1181/127, in_queue=1308, util=98.96% 00:11:06.199 nvme0n4: ios=1236/1536, merge=0/0, ticks=882/343, in_queue=1225, util=99.05% 00:11:06.199 07:34:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:06.199 [global] 00:11:06.199 thread=1 00:11:06.199 invalidate=1 00:11:06.199 rw=randwrite 00:11:06.199 time_based=1 00:11:06.199 runtime=1 00:11:06.199 ioengine=libaio 00:11:06.199 direct=1 00:11:06.199 bs=4096 00:11:06.199 iodepth=1 00:11:06.199 norandommap=0 00:11:06.199 numjobs=1 00:11:06.199 00:11:06.199 verify_dump=1 00:11:06.199 verify_backlog=512 00:11:06.199 verify_state_save=0 00:11:06.199 do_verify=1 00:11:06.199 verify=crc32c-intel 00:11:06.199 [job0] 00:11:06.199 filename=/dev/nvme0n1 00:11:06.199 [job1] 00:11:06.199 filename=/dev/nvme0n2 00:11:06.199 [job2] 00:11:06.199 filename=/dev/nvme0n3 00:11:06.199 [job3] 00:11:06.199 filename=/dev/nvme0n4 00:11:06.199 Could not set queue depth (nvme0n1) 00:11:06.199 Could not set queue depth (nvme0n2) 00:11:06.199 Could not set queue depth (nvme0n3) 00:11:06.199 Could not set queue depth (nvme0n4) 00:11:06.199 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.199 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.199 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.199 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.199 fio-3.35 00:11:06.199 Starting 4 threads 00:11:07.575 00:11:07.575 job0: (groupid=0, jobs=1): err= 0: pid=2881141: Tue Nov 19 07:34:59 2024 00:11:07.575 read: IOPS=524, BW=2098KiB/s (2148kB/s)(2104KiB/1003msec) 00:11:07.575 slat (nsec): min=5716, max=48258, avg=13540.23, stdev=7002.28 00:11:07.575 clat (usec): min=236, max=41128, avg=1395.10, stdev=6551.90 00:11:07.575 lat (usec): min=246, max=41147, avg=1408.64, stdev=6553.49 00:11:07.575 clat percentiles (usec): 00:11:07.575 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:11:07.575 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 306], 60.00th=[ 322], 00:11:07.575 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 392], 95.00th=[ 445], 00:11:07.575 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:07.575 | 99.99th=[41157] 00:11:07.575 write: IOPS=1020, BW=4084KiB/s (4182kB/s)(4096KiB/1003msec); 0 zone resets 00:11:07.575 slat (nsec): min=6097, max=56986, avg=14612.42, stdev=6833.30 00:11:07.575 clat (usec): min=175, max=465, avg=234.29, stdev=32.43 00:11:07.575 lat (usec): min=182, max=488, avg=248.91, stdev=35.52 00:11:07.575 clat percentiles (usec): 00:11:07.575 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 210], 00:11:07.575 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:11:07.575 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 293], 00:11:07.575 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 424], 99.95th=[ 465], 00:11:07.575 | 99.99th=[ 465] 00:11:07.575 bw ( KiB/s): min= 8192, max= 8192, per=40.16%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.575 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.575 lat (usec) : 250=52.84%, 500=45.94%, 750=0.32% 00:11:07.575 lat (msec) : 50=0.90% 00:11:07.575 cpu : usr=1.40%, sys=2.30%, ctx=1551, majf=0, minf=1 00:11:07.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:11:07.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.575 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.575 job1: (groupid=0, jobs=1): err= 0: pid=2881142: Tue Nov 19 07:34:59 2024 00:11:07.575 read: IOPS=1727, BW=6909KiB/s (7075kB/s)(6916KiB/1001msec) 00:11:07.575 slat (nsec): min=6044, max=51519, avg=13966.99, stdev=5356.21 00:11:07.575 clat (usec): min=230, max=405, avg=277.00, stdev=21.11 00:11:07.575 lat (usec): min=237, max=415, avg=290.96, stdev=24.33 00:11:07.575 clat percentiles (usec): 00:11:07.575 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 260], 00:11:07.575 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:11:07.575 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 310], 00:11:07.575 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 400], 99.95th=[ 404], 00:11:07.575 | 99.99th=[ 404] 00:11:07.575 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:07.575 slat (nsec): min=7713, max=58590, avg=17019.99, stdev=7584.98 00:11:07.575 clat (usec): min=160, max=1316, avg=216.76, stdev=43.85 00:11:07.575 lat (usec): min=168, max=1338, avg=233.78, stdev=47.14 00:11:07.575 clat percentiles (usec): 00:11:07.575 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 194], 00:11:07.575 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:11:07.575 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 255], 00:11:07.575 | 99.00th=[ 347], 99.50th=[ 408], 99.90th=[ 807], 99.95th=[ 857], 00:11:07.575 | 99.99th=[ 1319] 00:11:07.575 bw ( KiB/s): min= 8192, max= 8192, per=40.16%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.575 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.575 lat (usec) : 250=54.94%, 500=44.96%, 1000=0.08% 
00:11:07.575 lat (msec) : 2=0.03% 00:11:07.575 cpu : usr=4.40%, sys=7.90%, ctx=3780, majf=0, minf=1 00:11:07.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.575 issued rwts: total=1729,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.575 job2: (groupid=0, jobs=1): err= 0: pid=2881144: Tue Nov 19 07:34:59 2024 00:11:07.575 read: IOPS=18, BW=75.7KiB/s (77.5kB/s)(76.0KiB/1004msec) 00:11:07.575 slat (nsec): min=14190, max=37040, avg=27913.00, stdev=9909.43 00:11:07.575 clat (usec): min=40747, max=41183, avg=40968.72, stdev=119.62 00:11:07.575 lat (usec): min=40783, max=41202, avg=40996.63, stdev=116.26 00:11:07.575 clat percentiles (usec): 00:11:07.575 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:07.575 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:07.575 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:07.575 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:07.575 | 99.99th=[41157] 00:11:07.575 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:11:07.575 slat (nsec): min=9381, max=63309, avg=22405.80, stdev=10007.92 00:11:07.575 clat (usec): min=252, max=632, avg=409.24, stdev=63.16 00:11:07.575 lat (usec): min=268, max=673, avg=431.64, stdev=60.72 00:11:07.575 clat percentiles (usec): 00:11:07.575 | 1.00th=[ 281], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 355], 00:11:07.575 | 30.00th=[ 375], 40.00th=[ 396], 50.00th=[ 416], 60.00th=[ 429], 00:11:07.575 | 70.00th=[ 441], 80.00th=[ 453], 90.00th=[ 486], 95.00th=[ 515], 00:11:07.575 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 635], 99.95th=[ 635], 00:11:07.575 | 99.99th=[ 635] 00:11:07.575 bw ( KiB/s): min= 
4096, max= 4096, per=20.08%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.575 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.575 lat (usec) : 500=89.27%, 750=7.16% 00:11:07.575 lat (msec) : 50=3.58% 00:11:07.575 cpu : usr=0.70%, sys=1.60%, ctx=532, majf=0, minf=1 00:11:07.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.575 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.575 job3: (groupid=0, jobs=1): err= 0: pid=2881145: Tue Nov 19 07:34:59 2024 00:11:07.575 read: IOPS=1201, BW=4807KiB/s (4923kB/s)(4812KiB/1001msec) 00:11:07.575 slat (nsec): min=6080, max=57370, avg=17318.34, stdev=7503.98 00:11:07.575 clat (usec): min=265, max=1493, avg=374.91, stdev=91.29 00:11:07.575 lat (usec): min=273, max=1511, avg=392.22, stdev=94.59 00:11:07.575 clat percentiles (usec): 00:11:07.575 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 314], 00:11:07.575 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:11:07.575 | 70.00th=[ 379], 80.00th=[ 465], 90.00th=[ 490], 95.00th=[ 510], 00:11:07.575 | 99.00th=[ 627], 99.50th=[ 693], 99.90th=[ 1254], 99.95th=[ 1500], 00:11:07.575 | 99.99th=[ 1500] 00:11:07.575 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:07.575 slat (nsec): min=7678, max=74567, avg=19181.51, stdev=8652.05 00:11:07.575 clat (usec): min=208, max=1004, avg=315.35, stdev=86.11 00:11:07.575 lat (usec): min=218, max=1037, avg=334.53, stdev=88.09 00:11:07.575 clat percentiles (usec): 00:11:07.575 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 249], 00:11:07.575 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 297], 00:11:07.575 | 70.00th=[ 347], 80.00th=[ 396], 90.00th=[ 
445], 95.00th=[ 474], 00:11:07.575 | 99.00th=[ 537], 99.50th=[ 635], 99.90th=[ 996], 99.95th=[ 1004], 00:11:07.575 | 99.99th=[ 1004] 00:11:07.575 bw ( KiB/s): min= 5120, max= 5120, per=25.10%, avg=5120.00, stdev= 0.00, samples=1 00:11:07.575 iops : min= 1280, max= 1280, avg=1280.00, stdev= 0.00, samples=1 00:11:07.575 lat (usec) : 250=12.27%, 500=83.28%, 750=4.16%, 1000=0.15% 00:11:07.575 lat (msec) : 2=0.15% 00:11:07.575 cpu : usr=3.50%, sys=6.90%, ctx=2740, majf=0, minf=1 00:11:07.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.575 issued rwts: total=1203,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.575 00:11:07.575 Run status group 0 (all jobs): 00:11:07.575 READ: bw=13.5MiB/s (14.2MB/s), 75.7KiB/s-6909KiB/s (77.5kB/s-7075kB/s), io=13.6MiB (14.2MB), run=1001-1004msec 00:11:07.575 WRITE: bw=19.9MiB/s (20.9MB/s), 2040KiB/s-8184KiB/s (2089kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1004msec 00:11:07.575 00:11:07.575 Disk stats (read/write): 00:11:07.575 nvme0n1: ios=564/1024, merge=0/0, ticks=1374/241, in_queue=1615, util=99.20% 00:11:07.575 nvme0n2: ios=1577/1580, merge=0/0, ticks=1175/337, in_queue=1512, util=97.36% 00:11:07.575 nvme0n3: ios=63/512, merge=0/0, ticks=1602/202, in_queue=1804, util=98.44% 00:11:07.575 nvme0n4: ios=1072/1208, merge=0/0, ticks=896/396, in_queue=1292, util=97.27% 00:11:07.575 07:34:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:07.575 [global] 00:11:07.575 thread=1 00:11:07.575 invalidate=1 00:11:07.575 rw=write 00:11:07.575 time_based=1 00:11:07.575 runtime=1 00:11:07.575 ioengine=libaio 00:11:07.575 direct=1 
00:11:07.575 bs=4096 00:11:07.575 iodepth=128 00:11:07.575 norandommap=0 00:11:07.575 numjobs=1 00:11:07.575 00:11:07.575 verify_dump=1 00:11:07.575 verify_backlog=512 00:11:07.575 verify_state_save=0 00:11:07.575 do_verify=1 00:11:07.575 verify=crc32c-intel 00:11:07.575 [job0] 00:11:07.575 filename=/dev/nvme0n1 00:11:07.575 [job1] 00:11:07.575 filename=/dev/nvme0n2 00:11:07.575 [job2] 00:11:07.575 filename=/dev/nvme0n3 00:11:07.575 [job3] 00:11:07.576 filename=/dev/nvme0n4 00:11:07.576 Could not set queue depth (nvme0n1) 00:11:07.576 Could not set queue depth (nvme0n2) 00:11:07.576 Could not set queue depth (nvme0n3) 00:11:07.576 Could not set queue depth (nvme0n4) 00:11:07.576 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.576 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.576 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.576 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.576 fio-3.35 00:11:07.576 Starting 4 threads 00:11:08.953 00:11:08.953 job0: (groupid=0, jobs=1): err= 0: pid=2881480: Tue Nov 19 07:35:00 2024 00:11:08.953 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:11:08.953 slat (usec): min=3, max=14795, avg=130.54, stdev=928.61 00:11:08.953 clat (usec): min=6511, max=40777, avg=16748.84, stdev=4584.49 00:11:08.953 lat (usec): min=6529, max=40795, avg=16879.39, stdev=4647.73 00:11:08.953 clat percentiles (usec): 00:11:08.953 | 1.00th=[ 7373], 5.00th=[11863], 10.00th=[12649], 20.00th=[13698], 00:11:08.953 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15795], 60.00th=[16319], 00:11:08.953 | 70.00th=[17171], 80.00th=[18482], 90.00th=[22414], 95.00th=[26084], 00:11:08.953 | 99.00th=[33817], 99.50th=[37487], 99.90th=[40633], 99.95th=[40633], 00:11:08.953 | 99.99th=[40633] 
00:11:08.953 write: IOPS=3472, BW=13.6MiB/s (14.2MB/s)(13.7MiB/1013msec); 0 zone resets 00:11:08.953 slat (usec): min=4, max=13324, avg=149.44, stdev=749.57 00:11:08.953 clat (usec): min=2541, max=58645, avg=21863.07, stdev=10596.43 00:11:08.953 lat (usec): min=2566, max=58664, avg=22012.51, stdev=10681.87 00:11:08.953 clat percentiles (usec): 00:11:08.953 | 1.00th=[ 5211], 5.00th=[ 8586], 10.00th=[12387], 20.00th=[14615], 00:11:08.953 | 30.00th=[15401], 40.00th=[16319], 50.00th=[17957], 60.00th=[20317], 00:11:08.953 | 70.00th=[26870], 80.00th=[30016], 90.00th=[35390], 95.00th=[43779], 00:11:08.953 | 99.00th=[55837], 99.50th=[56361], 99.90th=[58459], 99.95th=[58459], 00:11:08.953 | 99.99th=[58459] 00:11:08.953 bw ( KiB/s): min=10824, max=16271, per=24.06%, avg=13547.50, stdev=3851.61, samples=2 00:11:08.953 iops : min= 2706, max= 4067, avg=3386.50, stdev=962.37, samples=2 00:11:08.953 lat (msec) : 4=0.33%, 10=4.04%, 20=65.19%, 50=28.80%, 100=1.64% 00:11:08.953 cpu : usr=5.24%, sys=8.79%, ctx=337, majf=0, minf=1 00:11:08.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:08.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.953 issued rwts: total=3072,3518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.953 job1: (groupid=0, jobs=1): err= 0: pid=2881489: Tue Nov 19 07:35:00 2024 00:11:08.953 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1002msec) 00:11:08.953 slat (usec): min=3, max=16854, avg=126.49, stdev=684.15 00:11:08.953 clat (usec): min=879, max=71514, avg=14921.68, stdev=5503.72 00:11:08.953 lat (usec): min=2296, max=71520, avg=15048.16, stdev=5575.96 00:11:08.953 clat percentiles (usec): 00:11:08.953 | 1.00th=[ 5997], 5.00th=[11338], 10.00th=[11994], 20.00th=[13042], 00:11:08.953 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14353], 
60.00th=[14746], 00:11:08.953 | 70.00th=[15270], 80.00th=[15795], 90.00th=[16909], 95.00th=[17433], 00:11:08.953 | 99.00th=[46400], 99.50th=[57410], 99.90th=[71828], 99.95th=[71828], 00:11:08.953 | 99.99th=[71828] 00:11:08.953 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:11:08.953 slat (usec): min=4, max=32370, avg=139.99, stdev=1061.17 00:11:08.953 clat (msec): min=5, max=120, avg=19.46, stdev=15.74 00:11:08.953 lat (msec): min=5, max=120, avg=19.60, stdev=15.82 00:11:08.953 clat percentiles (msec): 00:11:08.953 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:11:08.953 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 16], 00:11:08.953 | 70.00th=[ 17], 80.00th=[ 18], 90.00th=[ 28], 95.00th=[ 51], 00:11:08.953 | 99.00th=[ 100], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 121], 00:11:08.953 | 99.99th=[ 121] 00:11:08.953 bw ( KiB/s): min=15153, max=15153, per=26.91%, avg=15153.00, stdev= 0.00, samples=1 00:11:08.953 iops : min= 3788, max= 3788, avg=3788.00, stdev= 0.00, samples=1 00:11:08.953 lat (usec) : 1000=0.01% 00:11:08.953 lat (msec) : 4=0.18%, 10=1.55%, 20=88.38%, 50=6.92%, 100=2.52% 00:11:08.953 lat (msec) : 250=0.43% 00:11:08.953 cpu : usr=4.90%, sys=10.29%, ctx=420, majf=0, minf=1 00:11:08.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:08.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.953 issued rwts: total=3570,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.953 job2: (groupid=0, jobs=1): err= 0: pid=2881490: Tue Nov 19 07:35:00 2024 00:11:08.953 read: IOPS=2628, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1006msec) 00:11:08.953 slat (usec): min=3, max=16695, avg=180.93, stdev=1031.80 00:11:08.953 clat (usec): min=3696, max=62968, avg=21948.36, stdev=10182.08 00:11:08.953 lat (usec): 
min=5741, max=62985, avg=22129.30, stdev=10264.66 00:11:08.953 clat percentiles (usec): 00:11:08.953 | 1.00th=[10421], 5.00th=[12911], 10.00th=[14091], 20.00th=[15139], 00:11:08.953 | 30.00th=[16450], 40.00th=[16712], 50.00th=[17433], 60.00th=[18744], 00:11:08.953 | 70.00th=[22152], 80.00th=[27132], 90.00th=[38536], 95.00th=[44303], 00:11:08.953 | 99.00th=[56886], 99.50th=[56886], 99.90th=[62129], 99.95th=[63177], 00:11:08.953 | 99.99th=[63177] 00:11:08.953 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:11:08.953 slat (usec): min=5, max=17016, avg=155.25, stdev=950.72 00:11:08.953 clat (usec): min=323, max=74453, avg=22499.72, stdev=14531.42 00:11:08.953 lat (usec): min=579, max=74464, avg=22654.97, stdev=14629.18 00:11:08.953 clat percentiles (usec): 00:11:08.953 | 1.00th=[ 3949], 5.00th=[10028], 10.00th=[13173], 20.00th=[13960], 00:11:08.953 | 30.00th=[15270], 40.00th=[16057], 50.00th=[16712], 60.00th=[17433], 00:11:08.953 | 70.00th=[21890], 80.00th=[28967], 90.00th=[45876], 95.00th=[57410], 00:11:08.953 | 99.00th=[73925], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:11:08.953 | 99.99th=[74974] 00:11:08.953 bw ( KiB/s): min= 9408, max=14816, per=21.51%, avg=12112.00, stdev=3824.03, samples=2 00:11:08.953 iops : min= 2352, max= 3704, avg=3028.00, stdev=956.01, samples=2 00:11:08.953 lat (usec) : 500=0.02% 00:11:08.953 lat (msec) : 2=0.16%, 4=0.51%, 10=2.26%, 20=62.19%, 50=29.41% 00:11:08.953 lat (msec) : 100=5.46% 00:11:08.953 cpu : usr=4.08%, sys=7.36%, ctx=294, majf=0, minf=1 00:11:08.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:08.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.953 issued rwts: total=2644,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.953 job3: (groupid=0, jobs=1): err= 0: 
pid=2881491: Tue Nov 19 07:35:00 2024 00:11:08.953 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:11:08.953 slat (usec): min=4, max=7180, avg=118.39, stdev=645.71 00:11:08.953 clat (usec): min=8779, max=43295, avg=15412.61, stdev=2389.30 00:11:08.953 lat (usec): min=8799, max=43307, avg=15531.00, stdev=2423.37 00:11:08.953 clat percentiles (usec): 00:11:08.953 | 1.00th=[10028], 5.00th=[12125], 10.00th=[13304], 20.00th=[13566], 00:11:08.953 | 30.00th=[14484], 40.00th=[15008], 50.00th=[15533], 60.00th=[16057], 00:11:08.953 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17695], 95.00th=[17957], 00:11:08.953 | 99.00th=[20579], 99.50th=[21890], 99.90th=[39584], 99.95th=[39584], 00:11:08.953 | 99.99th=[43254] 00:11:08.953 write: IOPS=4058, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1007msec); 0 zone resets 00:11:08.953 slat (usec): min=4, max=14139, avg=128.94, stdev=626.31 00:11:08.953 clat (usec): min=6541, max=48010, avg=17752.71, stdev=5782.51 00:11:08.953 lat (usec): min=7549, max=48036, avg=17881.65, stdev=5823.79 00:11:08.953 clat percentiles (usec): 00:11:08.953 | 1.00th=[10028], 5.00th=[12518], 10.00th=[13566], 20.00th=[15139], 00:11:08.953 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16319], 60.00th=[16581], 00:11:08.953 | 70.00th=[16909], 80.00th=[19006], 90.00th=[23462], 95.00th=[30278], 00:11:08.953 | 99.00th=[44303], 99.50th=[46400], 99.90th=[47973], 99.95th=[47973], 00:11:08.953 | 99.99th=[47973] 00:11:08.953 bw ( KiB/s): min=15296, max=16351, per=28.10%, avg=15823.50, stdev=746.00, samples=2 00:11:08.953 iops : min= 3824, max= 4087, avg=3955.50, stdev=185.97, samples=2 00:11:08.953 lat (msec) : 10=0.95%, 20=90.42%, 50=8.63% 00:11:08.953 cpu : usr=6.46%, sys=9.44%, ctx=437, majf=0, minf=1 00:11:08.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:08.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.953 
issued rwts: total=3584,4087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.953 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.953 00:11:08.953 Run status group 0 (all jobs): 00:11:08.953 READ: bw=49.6MiB/s (52.0MB/s), 10.3MiB/s-13.9MiB/s (10.8MB/s-14.6MB/s), io=50.3MiB (52.7MB), run=1002-1013msec 00:11:08.953 WRITE: bw=55.0MiB/s (57.7MB/s), 11.9MiB/s-15.9MiB/s (12.5MB/s-16.6MB/s), io=55.7MiB (58.4MB), run=1002-1013msec 00:11:08.953 00:11:08.953 Disk stats (read/write): 00:11:08.953 nvme0n1: ios=2612/2999, merge=0/0, ticks=41283/60929, in_queue=102212, util=97.80% 00:11:08.953 nvme0n2: ios=2687/3072, merge=0/0, ticks=11416/15250, in_queue=26666, util=97.76% 00:11:08.953 nvme0n3: ios=2560/2639, merge=0/0, ticks=22613/20121, in_queue=42734, util=88.78% 00:11:08.953 nvme0n4: ios=3108/3257, merge=0/0, ticks=18524/24531, in_queue=43055, util=90.49% 00:11:08.953 07:35:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:08.953 [global] 00:11:08.953 thread=1 00:11:08.953 invalidate=1 00:11:08.953 rw=randwrite 00:11:08.953 time_based=1 00:11:08.953 runtime=1 00:11:08.953 ioengine=libaio 00:11:08.953 direct=1 00:11:08.953 bs=4096 00:11:08.954 iodepth=128 00:11:08.954 norandommap=0 00:11:08.954 numjobs=1 00:11:08.954 00:11:08.954 verify_dump=1 00:11:08.954 verify_backlog=512 00:11:08.954 verify_state_save=0 00:11:08.954 do_verify=1 00:11:08.954 verify=crc32c-intel 00:11:08.954 [job0] 00:11:08.954 filename=/dev/nvme0n1 00:11:08.954 [job1] 00:11:08.954 filename=/dev/nvme0n2 00:11:08.954 [job2] 00:11:08.954 filename=/dev/nvme0n3 00:11:08.954 [job3] 00:11:08.954 filename=/dev/nvme0n4 00:11:08.954 Could not set queue depth (nvme0n1) 00:11:08.954 Could not set queue depth (nvme0n2) 00:11:08.954 Could not set queue depth (nvme0n3) 00:11:08.954 Could not set queue depth (nvme0n4) 00:11:09.212 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.212 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.212 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.212 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.212 fio-3.35 00:11:09.212 Starting 4 threads 00:11:10.587 00:11:10.587 job0: (groupid=0, jobs=1): err= 0: pid=2881723: Tue Nov 19 07:35:02 2024 00:11:10.587 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.1MiB/1008msec) 00:11:10.587 slat (usec): min=2, max=16800, avg=157.60, stdev=1133.96 00:11:10.587 clat (usec): min=7527, max=77004, avg=20538.06, stdev=8515.50 00:11:10.587 lat (usec): min=8701, max=77012, avg=20695.66, stdev=8631.61 00:11:10.587 clat percentiles (usec): 00:11:10.587 | 1.00th=[10683], 5.00th=[12649], 10.00th=[13829], 20.00th=[15401], 00:11:10.587 | 30.00th=[15795], 40.00th=[17171], 50.00th=[18482], 60.00th=[19530], 00:11:10.588 | 70.00th=[22152], 80.00th=[23200], 90.00th=[28181], 95.00th=[35914], 00:11:10.588 | 99.00th=[62129], 99.50th=[71828], 99.90th=[77071], 99.95th=[77071], 00:11:10.588 | 99.99th=[77071] 00:11:10.588 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:11:10.588 slat (usec): min=4, max=17316, avg=174.66, stdev=1089.14 00:11:10.588 clat (usec): min=6386, max=77008, avg=24361.82, stdev=14369.48 00:11:10.588 lat (usec): min=6394, max=77019, avg=24536.48, stdev=14477.78 00:11:10.588 clat percentiles (usec): 00:11:10.588 | 1.00th=[ 8717], 5.00th=[11207], 10.00th=[12125], 20.00th=[13435], 00:11:10.588 | 30.00th=[14353], 40.00th=[16057], 50.00th=[19530], 60.00th=[20055], 00:11:10.588 | 70.00th=[25297], 80.00th=[40109], 90.00th=[44827], 95.00th=[49021], 00:11:10.588 | 99.00th=[69731], 99.50th=[72877], 99.90th=[73925], 99.95th=[77071], 00:11:10.588 | 99.99th=[77071] 
00:11:10.588 bw ( KiB/s): min=11768, max=11912, per=26.89%, avg=11840.00, stdev=101.82, samples=2 00:11:10.588 iops : min= 2942, max= 2978, avg=2960.00, stdev=25.46, samples=2 00:11:10.588 lat (msec) : 10=2.00%, 20=59.12%, 50=35.81%, 100=3.08% 00:11:10.588 cpu : usr=2.48%, sys=3.77%, ctx=213, majf=0, minf=1 00:11:10.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:10.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.588 issued rwts: total=2578,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.588 job1: (groupid=0, jobs=1): err= 0: pid=2881724: Tue Nov 19 07:35:02 2024 00:11:10.588 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:11:10.588 slat (usec): min=2, max=15019, avg=150.71, stdev=987.15 00:11:10.588 clat (usec): min=4714, max=40481, avg=19975.79, stdev=5987.15 00:11:10.588 lat (usec): min=4726, max=43041, avg=20126.50, stdev=6069.09 00:11:10.588 clat percentiles (usec): 00:11:10.588 | 1.00th=[10159], 5.00th=[13173], 10.00th=[13698], 20.00th=[14353], 00:11:10.588 | 30.00th=[14877], 40.00th=[16057], 50.00th=[21103], 60.00th=[23200], 00:11:10.588 | 70.00th=[23987], 80.00th=[25035], 90.00th=[26870], 95.00th=[29492], 00:11:10.588 | 99.00th=[36439], 99.50th=[38536], 99.90th=[40633], 99.95th=[40633], 00:11:10.588 | 99.99th=[40633] 00:11:10.588 write: IOPS=2797, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1009msec); 0 zone resets 00:11:10.588 slat (usec): min=3, max=35042, avg=208.83, stdev=1466.92 00:11:10.588 clat (msec): min=2, max=101, avg=26.76, stdev=19.27 00:11:10.588 lat (msec): min=2, max=101, avg=26.97, stdev=19.42 00:11:10.588 clat percentiles (msec): 00:11:10.588 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 14], 20.00th=[ 15], 00:11:10.588 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 22], 00:11:10.588 | 70.00th=[ 29], 
80.00th=[ 43], 90.00th=[ 57], 95.00th=[ 68], 00:11:10.588 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 102], 99.95th=[ 102], 00:11:10.588 | 99.99th=[ 102] 00:11:10.588 bw ( KiB/s): min= 9216, max=12344, per=24.48%, avg=10780.00, stdev=2211.83, samples=2 00:11:10.588 iops : min= 2304, max= 3086, avg=2695.00, stdev=552.96, samples=2 00:11:10.588 lat (msec) : 4=0.33%, 10=3.05%, 20=49.27%, 50=41.06%, 100=5.91% 00:11:10.588 lat (msec) : 250=0.39% 00:11:10.588 cpu : usr=2.98%, sys=4.66%, ctx=208, majf=0, minf=1 00:11:10.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:10.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.588 issued rwts: total=2560,2823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.588 job2: (groupid=0, jobs=1): err= 0: pid=2881725: Tue Nov 19 07:35:02 2024 00:11:10.588 read: IOPS=3356, BW=13.1MiB/s (13.7MB/s)(13.7MiB/1047msec) 00:11:10.588 slat (usec): min=2, max=11337, avg=140.98, stdev=812.93 00:11:10.588 clat (usec): min=8815, max=60116, avg=19724.97, stdev=8647.87 00:11:10.588 lat (usec): min=8839, max=67109, avg=19865.95, stdev=8685.57 00:11:10.588 clat percentiles (usec): 00:11:10.588 | 1.00th=[10028], 5.00th=[12649], 10.00th=[13173], 20.00th=[14484], 00:11:10.588 | 30.00th=[15795], 40.00th=[16450], 50.00th=[16909], 60.00th=[17433], 00:11:10.588 | 70.00th=[18220], 80.00th=[23987], 90.00th=[28967], 95.00th=[34341], 00:11:10.588 | 99.00th=[58983], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:11:10.588 | 99.99th=[60031] 00:11:10.588 write: IOPS=3423, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1047msec); 0 zone resets 00:11:10.588 slat (usec): min=3, max=11598, avg=132.59, stdev=823.13 00:11:10.588 clat (usec): min=8049, max=35533, avg=17514.78, stdev=3976.82 00:11:10.588 lat (usec): min=8075, max=35554, avg=17647.37, stdev=4031.53 
00:11:10.588 clat percentiles (usec): 00:11:10.588 | 1.00th=[10552], 5.00th=[13042], 10.00th=[13566], 20.00th=[14091], 00:11:10.588 | 30.00th=[15533], 40.00th=[16188], 50.00th=[16712], 60.00th=[16909], 00:11:10.588 | 70.00th=[17695], 80.00th=[20841], 90.00th=[23725], 95.00th=[23987], 00:11:10.588 | 99.00th=[32900], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:11:10.588 | 99.99th=[35390] 00:11:10.588 bw ( KiB/s): min=12288, max=16384, per=32.55%, avg=14336.00, stdev=2896.31, samples=2 00:11:10.588 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:10.588 lat (msec) : 10=0.61%, 20=75.04%, 50=22.63%, 100=1.73% 00:11:10.588 cpu : usr=3.54%, sys=6.60%, ctx=309, majf=0, minf=1 00:11:10.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:10.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.588 issued rwts: total=3514,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.588 job3: (groupid=0, jobs=1): err= 0: pid=2881726: Tue Nov 19 07:35:02 2024 00:11:10.588 read: IOPS=2010, BW=8044KiB/s (8237kB/s)(8108KiB/1008msec) 00:11:10.588 slat (usec): min=3, max=39798, avg=269.20, stdev=1891.62 00:11:10.588 clat (msec): min=7, max=126, avg=34.11, stdev=24.60 00:11:10.588 lat (msec): min=7, max=126, avg=34.38, stdev=24.79 00:11:10.588 clat percentiles (msec): 00:11:10.588 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:11:10.588 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 24], 60.00th=[ 25], 00:11:10.588 | 70.00th=[ 30], 80.00th=[ 50], 90.00th=[ 73], 95.00th=[ 100], 00:11:10.588 | 99.00th=[ 113], 99.50th=[ 113], 99.90th=[ 116], 99.95th=[ 117], 00:11:10.588 | 99.99th=[ 127] 00:11:10.588 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:11:10.588 slat (usec): min=5, max=32746, avg=211.65, 
stdev=1568.87 00:11:10.588 clat (msec): min=12, max=100, avg=28.68, stdev=16.86 00:11:10.588 lat (msec): min=12, max=100, avg=28.89, stdev=17.02 00:11:10.588 clat percentiles (msec): 00:11:10.588 | 1.00th=[ 13], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 18], 00:11:10.588 | 30.00th=[ 19], 40.00th=[ 19], 50.00th=[ 23], 60.00th=[ 24], 00:11:10.588 | 70.00th=[ 28], 80.00th=[ 40], 90.00th=[ 55], 95.00th=[ 70], 00:11:10.588 | 99.00th=[ 85], 99.50th=[ 85], 99.90th=[ 85], 99.95th=[ 96], 00:11:10.588 | 99.99th=[ 102] 00:11:10.588 bw ( KiB/s): min= 6208, max=10176, per=18.60%, avg=8192.00, stdev=2805.80, samples=2 00:11:10.588 iops : min= 1552, max= 2544, avg=2048.00, stdev=701.45, samples=2 00:11:10.588 lat (msec) : 10=0.22%, 20=35.14%, 50=50.28%, 100=12.20%, 250=2.16% 00:11:10.588 cpu : usr=2.88%, sys=4.07%, ctx=124, majf=0, minf=1 00:11:10.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:10.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.588 issued rwts: total=2027,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.588 00:11:10.588 Run status group 0 (all jobs): 00:11:10.588 READ: bw=39.8MiB/s (41.8MB/s), 8044KiB/s-13.1MiB/s (8237kB/s-13.7MB/s), io=41.7MiB (43.7MB), run=1008-1047msec 00:11:10.588 WRITE: bw=43.0MiB/s (45.1MB/s), 8127KiB/s-13.4MiB/s (8322kB/s-14.0MB/s), io=45.0MiB (47.2MB), run=1008-1047msec 00:11:10.588 00:11:10.588 Disk stats (read/write): 00:11:10.588 nvme0n1: ios=2077/2291, merge=0/0, ticks=43786/62593, in_queue=106379, util=97.60% 00:11:10.588 nvme0n2: ios=2214/2560, merge=0/0, ticks=26702/38933, in_queue=65635, util=96.95% 00:11:10.588 nvme0n3: ios=3048/3072, merge=0/0, ticks=21245/17415, in_queue=38660, util=96.46% 00:11:10.588 nvme0n4: ios=1818/2048, merge=0/0, ticks=16974/17914, in_queue=34888, util=97.28% 00:11:10.588 07:35:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:10.588 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2881862 00:11:10.588 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:10.588 07:35:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:10.588 [global] 00:11:10.588 thread=1 00:11:10.588 invalidate=1 00:11:10.588 rw=read 00:11:10.588 time_based=1 00:11:10.588 runtime=10 00:11:10.588 ioengine=libaio 00:11:10.588 direct=1 00:11:10.588 bs=4096 00:11:10.588 iodepth=1 00:11:10.588 norandommap=1 00:11:10.588 numjobs=1 00:11:10.588 00:11:10.588 [job0] 00:11:10.588 filename=/dev/nvme0n1 00:11:10.588 [job1] 00:11:10.588 filename=/dev/nvme0n2 00:11:10.588 [job2] 00:11:10.588 filename=/dev/nvme0n3 00:11:10.588 [job3] 00:11:10.588 filename=/dev/nvme0n4 00:11:10.588 Could not set queue depth (nvme0n1) 00:11:10.588 Could not set queue depth (nvme0n2) 00:11:10.588 Could not set queue depth (nvme0n3) 00:11:10.588 Could not set queue depth (nvme0n4) 00:11:10.588 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.588 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.588 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.588 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.588 fio-3.35 00:11:10.588 Starting 4 threads 00:11:13.873 07:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:13.874 07:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:13.874 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=385024, buflen=4096 00:11:13.874 fio: pid=2881953, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:13.874 07:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.874 07:35:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:13.874 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=40456192, buflen=4096 00:11:13.874 fio: pid=2881952, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.441 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.441 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:14.441 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1482752, buflen=4096 00:11:14.441 fio: pid=2881950, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.700 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54267904, buflen=4096 00:11:14.700 fio: pid=2881951, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.700 00:11:14.700 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2881950: Tue Nov 19 07:35:06 2024 00:11:14.700 read: IOPS=101, BW=403KiB/s (413kB/s)(1448KiB/3591msec) 00:11:14.700 slat (usec): min=6, max=19879, avg=84.17, stdev=1103.11 00:11:14.700 clat (usec): min=238, max=60166, avg=9767.49, 
stdev=17293.94 00:11:14.700 lat (usec): min=245, max=67082, avg=9851.82, stdev=17487.39 00:11:14.700 clat percentiles (usec): 00:11:14.700 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 258], 00:11:14.700 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:11:14.700 | 70.00th=[ 416], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:11:14.700 | 99.00th=[41681], 99.50th=[42730], 99.90th=[60031], 99.95th=[60031], 00:11:14.700 | 99.99th=[60031] 00:11:14.700 bw ( KiB/s): min= 96, max= 2104, per=1.87%, avg=454.67, stdev=808.79, samples=6 00:11:14.700 iops : min= 24, max= 526, avg=113.67, stdev=202.20, samples=6 00:11:14.700 lat (usec) : 250=4.68%, 500=70.80%, 750=1.10% 00:11:14.700 lat (msec) : 50=22.87%, 100=0.28% 00:11:14.700 cpu : usr=0.08%, sys=0.14%, ctx=365, majf=0, minf=2 00:11:14.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.700 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.700 issued rwts: total=363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.700 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2881951: Tue Nov 19 07:35:06 2024 00:11:14.700 read: IOPS=3404, BW=13.3MiB/s (13.9MB/s)(51.8MiB/3892msec) 00:11:14.700 slat (usec): min=4, max=15374, avg=13.97, stdev=219.04 00:11:14.700 clat (usec): min=204, max=978, avg=274.86, stdev=51.70 00:11:14.700 lat (usec): min=209, max=15939, avg=288.83, stdev=229.79 00:11:14.700 clat percentiles (usec): 00:11:14.700 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 249], 00:11:14.700 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:11:14.700 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 326], 00:11:14.700 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[ 619], 99.95th=[ 
652], 00:11:14.700 | 99.99th=[ 783] 00:11:14.700 bw ( KiB/s): min=10664, max=14531, per=56.23%, avg=13629.00, stdev=1336.35, samples=7 00:11:14.700 iops : min= 2666, max= 3632, avg=3407.14, stdev=334.00, samples=7 00:11:14.700 lat (usec) : 250=20.79%, 500=77.46%, 750=1.73%, 1000=0.02% 00:11:14.700 cpu : usr=1.80%, sys=5.53%, ctx=13257, majf=0, minf=1 00:11:14.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.700 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.700 issued rwts: total=13250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.700 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2881952: Tue Nov 19 07:35:06 2024 00:11:14.700 read: IOPS=3052, BW=11.9MiB/s (12.5MB/s)(38.6MiB/3236msec) 00:11:14.700 slat (usec): min=4, max=11667, avg=18.74, stdev=165.76 00:11:14.700 clat (usec): min=228, max=863, avg=302.49, stdev=54.72 00:11:14.700 lat (usec): min=234, max=12041, avg=321.23, stdev=177.05 00:11:14.700 clat percentiles (usec): 00:11:14.700 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 265], 00:11:14.700 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:11:14.700 | 70.00th=[ 314], 80.00th=[ 347], 90.00th=[ 371], 95.00th=[ 392], 00:11:14.700 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 562], 99.95th=[ 586], 00:11:14.700 | 99.99th=[ 865] 00:11:14.700 bw ( KiB/s): min=10696, max=13360, per=50.89%, avg=12333.33, stdev=959.37, samples=6 00:11:14.700 iops : min= 2674, max= 3340, avg=3083.33, stdev=239.84, samples=6 00:11:14.700 lat (usec) : 250=4.63%, 500=94.03%, 750=1.33%, 1000=0.01% 00:11:14.700 cpu : usr=2.19%, sys=5.53%, ctx=9883, majf=0, minf=2 00:11:14.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.700 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.700 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.700 issued rwts: total=9878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.700 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2881953: Tue Nov 19 07:35:06 2024 00:11:14.700 read: IOPS=32, BW=128KiB/s (131kB/s)(376KiB/2946msec) 00:11:14.700 slat (nsec): min=7810, max=53435, avg=18510.76, stdev=8883.63 00:11:14.700 clat (usec): min=366, max=42033, avg=31075.45, stdev=17531.99 00:11:14.700 lat (usec): min=374, max=42049, avg=31093.78, stdev=17536.09 00:11:14.700 clat percentiles (usec): 00:11:14.700 | 1.00th=[ 367], 5.00th=[ 379], 10.00th=[ 396], 20.00th=[ 420], 00:11:14.700 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:14.700 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:14.700 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:14.700 | 99.99th=[42206] 00:11:14.700 bw ( KiB/s): min= 96, max= 112, per=0.41%, avg=100.80, stdev= 7.16, samples=5 00:11:14.700 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:11:14.700 lat (usec) : 500=23.16% 00:11:14.700 lat (msec) : 2=1.05%, 50=74.74% 00:11:14.700 cpu : usr=0.00%, sys=0.10%, ctx=97, majf=0, minf=2 00:11:14.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.700 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.700 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.700 00:11:14.700 Run status group 0 (all jobs): 00:11:14.700 READ: bw=23.7MiB/s (24.8MB/s), 128KiB/s-13.3MiB/s (131kB/s-13.9MB/s), io=92.1MiB 
(96.6MB), run=2946-3892msec 00:11:14.700 00:11:14.700 Disk stats (read/write): 00:11:14.700 nvme0n1: ios=355/0, merge=0/0, ticks=3308/0, in_queue=3308, util=95.51% 00:11:14.700 nvme0n2: ios=13226/0, merge=0/0, ticks=3504/0, in_queue=3504, util=95.43% 00:11:14.700 nvme0n3: ios=9600/0, merge=0/0, ticks=3802/0, in_queue=3802, util=98.35% 00:11:14.700 nvme0n4: ios=135/0, merge=0/0, ticks=3448/0, in_queue=3448, util=99.36% 00:11:14.700 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.700 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:14.959 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.959 07:35:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:15.526 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.526 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:15.785 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.785 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:16.043 07:35:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.043 07:35:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:16.301 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:16.301 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2881862 00:11:16.301 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:16.301 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:17.236 nvmf hotplug test: fio failed as expected 00:11:17.236 07:35:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.493 rmmod nvme_tcp 00:11:17.493 rmmod nvme_fabrics 00:11:17.493 rmmod nvme_keyring 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2879695 ']' 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2879695 00:11:17.493 07:35:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2879695 ']' 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2879695 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.493 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2879695 00:11:17.751 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.751 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.751 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2879695' 00:11:17.751 killing process with pid 2879695 00:11:17.751 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2879695 00:11:17.751 07:35:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2879695 00:11:18.687 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 
00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.688 07:35:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.224 00:11:21.224 real 0m27.256s 00:11:21.224 user 1m35.219s 00:11:21.224 sys 0m7.842s 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.224 ************************************ 00:11:21.224 END TEST nvmf_fio_target 00:11:21.224 ************************************ 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.224 ************************************ 00:11:21.224 START TEST nvmf_bdevio 00:11:21.224 ************************************ 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.224 * Looking for test storage... 00:11:21.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:21.224 07:35:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:21.224 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:21.224 --rc genhtml_branch_coverage=1 00:11:21.224 --rc genhtml_function_coverage=1 00:11:21.224 --rc genhtml_legend=1 00:11:21.224 --rc geninfo_all_blocks=1 00:11:21.224 --rc geninfo_unexecuted_blocks=1 00:11:21.224 00:11:21.224 ' 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.224 --rc genhtml_branch_coverage=1 00:11:21.224 --rc genhtml_function_coverage=1 00:11:21.224 --rc genhtml_legend=1 00:11:21.224 --rc geninfo_all_blocks=1 00:11:21.224 --rc geninfo_unexecuted_blocks=1 00:11:21.224 00:11:21.224 ' 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.224 --rc genhtml_branch_coverage=1 00:11:21.224 --rc genhtml_function_coverage=1 00:11:21.224 --rc genhtml_legend=1 00:11:21.224 --rc geninfo_all_blocks=1 00:11:21.224 --rc geninfo_unexecuted_blocks=1 00:11:21.224 00:11:21.224 ' 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:21.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.224 --rc genhtml_branch_coverage=1 00:11:21.224 --rc genhtml_function_coverage=1 00:11:21.224 --rc genhtml_legend=1 00:11:21.224 --rc geninfo_all_blocks=1 00:11:21.224 --rc geninfo_unexecuted_blocks=1 00:11:21.224 00:11:21.224 ' 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.224 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.225 07:35:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.128 07:35:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.128 07:35:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:23.128 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:23.128 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.128 
07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:23.128 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:23.128 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.128 07:35:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.128 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.128 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.128 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:23.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:11:23.129 00:11:23.129 --- 10.0.0.2 ping statistics --- 00:11:23.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.129 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:11:23.129 00:11:23.129 --- 10.0.0.1 ping statistics --- 00:11:23.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.129 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.129 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.388 07:35:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2884857 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2884857 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2884857 ']' 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.388 07:35:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.388 [2024-11-19 07:35:15.160057] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:23.388 [2024-11-19 07:35:15.160207] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.388 [2024-11-19 07:35:15.306524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.646 [2024-11-19 07:35:15.443522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.646 [2024-11-19 07:35:15.443602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.646 [2024-11-19 07:35:15.443629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.646 [2024-11-19 07:35:15.443653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.646 [2024-11-19 07:35:15.443673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:23.646 [2024-11-19 07:35:15.446519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:23.646 [2024-11-19 07:35:15.446574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:23.646 [2024-11-19 07:35:15.446624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.646 [2024-11-19 07:35:15.446630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.580 [2024-11-19 07:35:16.203434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.580 07:35:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.580 Malloc0 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.580 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.581 [2024-11-19 07:35:16.319114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.581 { 00:11:24.581 "params": { 00:11:24.581 "name": "Nvme$subsystem", 00:11:24.581 "trtype": "$TEST_TRANSPORT", 00:11:24.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.581 "adrfam": "ipv4", 00:11:24.581 "trsvcid": "$NVMF_PORT", 00:11:24.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.581 "hdgst": ${hdgst:-false}, 00:11:24.581 "ddgst": ${ddgst:-false} 00:11:24.581 }, 00:11:24.581 "method": "bdev_nvme_attach_controller" 00:11:24.581 } 00:11:24.581 EOF 00:11:24.581 )") 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:24.581 07:35:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.581 "params": { 00:11:24.581 "name": "Nvme1", 00:11:24.581 "trtype": "tcp", 00:11:24.581 "traddr": "10.0.0.2", 00:11:24.581 "adrfam": "ipv4", 00:11:24.581 "trsvcid": "4420", 00:11:24.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.581 "hdgst": false, 00:11:24.581 "ddgst": false 00:11:24.581 }, 00:11:24.581 "method": "bdev_nvme_attach_controller" 00:11:24.581 }' 00:11:24.581 [2024-11-19 07:35:16.405198] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:24.581 [2024-11-19 07:35:16.405333] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885017 ] 00:11:24.839 [2024-11-19 07:35:16.541792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:24.839 [2024-11-19 07:35:16.676591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.839 [2024-11-19 07:35:16.676639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.839 [2024-11-19 07:35:16.676643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.405 I/O targets: 00:11:25.405 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:25.405 00:11:25.405 00:11:25.405 CUnit - A unit testing framework for C - Version 2.1-3 00:11:25.405 http://cunit.sourceforge.net/ 00:11:25.405 00:11:25.405 00:11:25.405 Suite: bdevio tests on: Nvme1n1 00:11:25.405 Test: blockdev write read block ...passed 00:11:25.405 Test: blockdev write zeroes read block ...passed 00:11:25.405 Test: blockdev write zeroes read no split ...passed 00:11:25.405 Test: blockdev write zeroes read split 
...passed 00:11:25.405 Test: blockdev write zeroes read split partial ...passed 00:11:25.405 Test: blockdev reset ...[2024-11-19 07:35:17.339268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:25.663 [2024-11-19 07:35:17.339477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:25.663 [2024-11-19 07:35:17.352994] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:25.663 passed 00:11:25.663 Test: blockdev write read 8 blocks ...passed 00:11:25.663 Test: blockdev write read size > 128k ...passed 00:11:25.663 Test: blockdev write read invalid size ...passed 00:11:25.663 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.663 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.663 Test: blockdev write read max offset ...passed 00:11:25.663 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.663 Test: blockdev writev readv 8 blocks ...passed 00:11:25.663 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.663 Test: blockdev writev readv block ...passed 00:11:25.663 Test: blockdev writev readv size > 128k ...passed 00:11:25.663 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.663 Test: blockdev comparev and writev ...[2024-11-19 07:35:17.570194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.663 [2024-11-19 07:35:17.570273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:25.663 [2024-11-19 07:35:17.570314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.663 [2024-11-19 
07:35:17.570341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:25.663 [2024-11-19 07:35:17.570817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.664 [2024-11-19 07:35:17.570850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:25.664 [2024-11-19 07:35:17.570884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.664 [2024-11-19 07:35:17.570909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:25.664 [2024-11-19 07:35:17.571362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.664 [2024-11-19 07:35:17.571394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:25.664 [2024-11-19 07:35:17.571429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.664 [2024-11-19 07:35:17.571453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:25.664 [2024-11-19 07:35:17.571923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.664 [2024-11-19 07:35:17.571958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:25.664 [2024-11-19 07:35:17.571997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.664 [2024-11-19 07:35:17.572025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:25.922 passed 00:11:25.922 Test: blockdev nvme passthru rw ...passed 00:11:25.922 Test: blockdev nvme passthru vendor specific ...[2024-11-19 07:35:17.656149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.922 [2024-11-19 07:35:17.656214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:25.922 [2024-11-19 07:35:17.656478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.922 [2024-11-19 07:35:17.656520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:25.922 [2024-11-19 07:35:17.656719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.922 [2024-11-19 07:35:17.656753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:25.922 [2024-11-19 07:35:17.657016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.922 [2024-11-19 07:35:17.657050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:25.922 passed 00:11:25.922 Test: blockdev nvme admin passthru ...passed 00:11:25.922 Test: blockdev copy ...passed 00:11:25.922 00:11:25.922 Run Summary: Type Total Ran Passed Failed Inactive 00:11:25.922 suites 1 1 n/a 0 0 00:11:25.922 tests 23 23 23 0 0 00:11:25.922 asserts 152 152 152 0 n/a 00:11:25.922 00:11:25.922 Elapsed time = 1.193 seconds 
00:11:26.857 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.857 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.857 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.857 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.857 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.858 rmmod nvme_tcp 00:11:26.858 rmmod nvme_fabrics 00:11:26.858 rmmod nvme_keyring 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2884857 ']' 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2884857 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2884857 ']' 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2884857 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2884857 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2884857' 00:11:26.858 killing process with pid 2884857 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2884857 00:11:26.858 07:35:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2884857 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.243 07:35:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.149 07:35:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.149 00:11:30.149 real 0m9.262s 00:11:30.149 user 0m22.026s 00:11:30.149 sys 0m2.395s 00:11:30.149 07:35:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.149 07:35:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.149 ************************************ 00:11:30.149 END TEST nvmf_bdevio 00:11:30.149 ************************************ 00:11:30.149 07:35:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:30.149 00:11:30.149 real 4m30.625s 00:11:30.149 user 11m52.051s 00:11:30.149 sys 1m10.142s 00:11:30.149 07:35:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.149 07:35:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.149 ************************************ 00:11:30.149 END TEST nvmf_target_core 00:11:30.149 ************************************ 00:11:30.149 07:35:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:30.149 07:35:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.149 07:35:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.149 07:35:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:30.149 ************************************ 00:11:30.149 START TEST nvmf_target_extra 00:11:30.149 ************************************ 00:11:30.149 07:35:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:30.149 * Looking for test storage... 00:11:30.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:30.149 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:30.149 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:30.149 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.409 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:30.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.409 --rc genhtml_branch_coverage=1 00:11:30.409 --rc genhtml_function_coverage=1 00:11:30.409 --rc genhtml_legend=1 00:11:30.410 --rc geninfo_all_blocks=1 
00:11:30.410 --rc geninfo_unexecuted_blocks=1 00:11:30.410 00:11:30.410 ' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:30.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.410 --rc genhtml_branch_coverage=1 00:11:30.410 --rc genhtml_function_coverage=1 00:11:30.410 --rc genhtml_legend=1 00:11:30.410 --rc geninfo_all_blocks=1 00:11:30.410 --rc geninfo_unexecuted_blocks=1 00:11:30.410 00:11:30.410 ' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:30.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.410 --rc genhtml_branch_coverage=1 00:11:30.410 --rc genhtml_function_coverage=1 00:11:30.410 --rc genhtml_legend=1 00:11:30.410 --rc geninfo_all_blocks=1 00:11:30.410 --rc geninfo_unexecuted_blocks=1 00:11:30.410 00:11:30.410 ' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:30.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.410 --rc genhtml_branch_coverage=1 00:11:30.410 --rc genhtml_function_coverage=1 00:11:30.410 --rc genhtml_legend=1 00:11:30.410 --rc geninfo_all_blocks=1 00:11:30.410 --rc geninfo_unexecuted_blocks=1 00:11:30.410 00:11:30.410 ' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:30.410 ************************************ 00:11:30.410 START TEST nvmf_example 00:11:30.410 ************************************ 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:30.410 * Looking for test storage... 00:11:30.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.410 
07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0
00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:30.410 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:30.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:30.411 --rc genhtml_branch_coverage=1
00:11:30.411 --rc genhtml_function_coverage=1
00:11:30.411 --rc genhtml_legend=1
00:11:30.411 --rc geninfo_all_blocks=1
00:11:30.411 --rc geninfo_unexecuted_blocks=1
00:11:30.411 
00:11:30.411 '
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:30.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:30.411 --rc genhtml_branch_coverage=1
00:11:30.411 --rc genhtml_function_coverage=1
00:11:30.411 --rc genhtml_legend=1
00:11:30.411 --rc geninfo_all_blocks=1
00:11:30.411 --rc geninfo_unexecuted_blocks=1
00:11:30.411 
00:11:30.411 '
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:11:30.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:30.411 --rc genhtml_branch_coverage=1
00:11:30.411 --rc genhtml_function_coverage=1
00:11:30.411 --rc genhtml_legend=1
00:11:30.411 --rc geninfo_all_blocks=1
00:11:30.411 --rc geninfo_unexecuted_blocks=1
00:11:30.411 
00:11:30.411 '
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:11:30.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:30.411 --rc genhtml_branch_coverage=1
00:11:30.411 --rc genhtml_function_coverage=1
00:11:30.411 --rc genhtml_legend=1
00:11:30.411 --rc geninfo_all_blocks=1
00:11:30.411 --rc geninfo_unexecuted_blocks=1
00:11:30.411 
00:11:30.411 '
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:30.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']'
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}")
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:30.411 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:30.669 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:30.670 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:30.670 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable
00:11:30.670 07:35:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=()
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=()
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=()
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=()
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=()
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:11:32.629 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:11:32.629 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:32.629 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:11:32.630 Found net devices under 0000:0a:00.0: cvl_0_0
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:11:32.630 Found net devices under 0000:0a:00.1: cvl_0_1
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:32.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:32.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms
00:11:32.630 
00:11:32.630 --- 10.0.0.2 ping statistics ---
00:11:32.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:32.630 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:32.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:32.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:11:32.630 
00:11:32.630 --- 10.0.0.1 ping statistics ---
00:11:32.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:32.630 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:32.630 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2887422
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2887422
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2887422 ']'
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:32.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:32.889 07:35:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.825 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:34.083 07:35:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:46.284 Initializing NVMe Controllers
00:11:46.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:46.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:46.284 Initialization complete. Launching workers.
00:11:46.284 ========================================================
00:11:46.284 Latency(us)
00:11:46.284 Device Information : IOPS MiB/s Average min max
00:11:46.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11784.28 46.03 5430.73 1300.32 18828.39
00:11:46.284 ========================================================
00:11:46.284 Total : 11784.28 46.03 5430.73 1300.32 18828.39
00:11:46.284 
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:46.284 rmmod nvme_tcp
00:11:46.284 rmmod nvme_fabrics
00:11:46.284 rmmod nvme_keyring
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2887422 ']'
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2887422
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2887422 ']'
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2887422
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2887422
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2887422'
00:11:46.284 killing process with pid 2887422
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2887422
00:11:46.284 07:35:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2887422
00:11:46.284 nvmf threads initialize successfully
00:11:46.284 bdev subsystem init successfully
00:11:46.284 created a nvmf target service
00:11:46.284 create targets's poll groups done
00:11:46.284 all subsystems of target started
00:11:46.284 nvmf target is running
00:11:46.284 all subsystems of target stopped
00:11:46.284 destroy targets's poll groups done
00:11:46.284 destroyed the nvmf target service
00:11:46.285 bdev subsystem finish successfully
00:11:46.285 nvmf threads destroy successfully
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:46.285 07:35:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:47.662 
00:11:47.662 real 0m17.356s
00:11:47.662 user 0m49.182s
00:11:47.662 sys 0m3.302s
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:47.662 ************************************
00:11:47.662 END TEST nvmf_example
00:11:47.662 ************************************
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:47.662 ************************************
00:11:47.662 START TEST nvmf_filesystem
00:11:47.662 ************************************
00:11:47.662 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:47.924 * Looking for test storage...
00:11:47.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:47.924 
07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:47.924 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:47.924 --rc genhtml_branch_coverage=1 00:11:47.924 --rc genhtml_function_coverage=1 00:11:47.924 --rc genhtml_legend=1 00:11:47.924 --rc geninfo_all_blocks=1 00:11:47.924 --rc geninfo_unexecuted_blocks=1 00:11:47.924 00:11:47.924 ' 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:47.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.924 --rc genhtml_branch_coverage=1 00:11:47.924 --rc genhtml_function_coverage=1 00:11:47.924 --rc genhtml_legend=1 00:11:47.924 --rc geninfo_all_blocks=1 00:11:47.924 --rc geninfo_unexecuted_blocks=1 00:11:47.924 00:11:47.924 ' 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:47.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.924 --rc genhtml_branch_coverage=1 00:11:47.924 --rc genhtml_function_coverage=1 00:11:47.924 --rc genhtml_legend=1 00:11:47.924 --rc geninfo_all_blocks=1 00:11:47.924 --rc geninfo_unexecuted_blocks=1 00:11:47.924 00:11:47.924 ' 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:47.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.924 --rc genhtml_branch_coverage=1 00:11:47.924 --rc genhtml_function_coverage=1 00:11:47.924 --rc genhtml_legend=1 00:11:47.924 --rc geninfo_all_blocks=1 00:11:47.924 --rc geninfo_unexecuted_blocks=1 00:11:47.924 00:11:47.924 ' 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:47.924 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:47.924 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:47.925 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:47.925 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:47.925 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:47.925 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:47.925 
07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:47.925 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:47.925 #define SPDK_CONFIG_H 00:11:47.925 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:47.925 #define SPDK_CONFIG_APPS 1 00:11:47.925 #define SPDK_CONFIG_ARCH native 00:11:47.925 #define SPDK_CONFIG_ASAN 1 00:11:47.925 #undef SPDK_CONFIG_AVAHI 00:11:47.925 #undef SPDK_CONFIG_CET 00:11:47.925 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:47.925 #define SPDK_CONFIG_COVERAGE 1 00:11:47.926 #define SPDK_CONFIG_CROSS_PREFIX 00:11:47.926 #undef SPDK_CONFIG_CRYPTO 00:11:47.926 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:47.926 #undef SPDK_CONFIG_CUSTOMOCF 00:11:47.926 #undef SPDK_CONFIG_DAOS 00:11:47.926 #define SPDK_CONFIG_DAOS_DIR 00:11:47.926 #define SPDK_CONFIG_DEBUG 1 00:11:47.926 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:47.926 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:47.926 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:47.926 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:47.926 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:47.926 #undef SPDK_CONFIG_DPDK_UADK 00:11:47.926 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:47.926 #define SPDK_CONFIG_EXAMPLES 1 00:11:47.926 #undef SPDK_CONFIG_FC 00:11:47.926 #define SPDK_CONFIG_FC_PATH 00:11:47.926 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:47.926 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:47.926 #define SPDK_CONFIG_FSDEV 1 00:11:47.926 #undef SPDK_CONFIG_FUSE 00:11:47.926 #undef SPDK_CONFIG_FUZZER 00:11:47.926 #define SPDK_CONFIG_FUZZER_LIB 00:11:47.926 #undef SPDK_CONFIG_GOLANG 00:11:47.926 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:47.926 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:47.926 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:47.926 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:47.926 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:47.926 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:47.926 #undef SPDK_CONFIG_HAVE_LZ4 00:11:47.926 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:47.926 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:47.926 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:47.926 #define SPDK_CONFIG_IDXD 1 00:11:47.926 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:47.926 #undef SPDK_CONFIG_IPSEC_MB 00:11:47.926 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:47.926 #define SPDK_CONFIG_ISAL 1 00:11:47.926 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:47.926 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:47.926 #define SPDK_CONFIG_LIBDIR 00:11:47.926 #undef SPDK_CONFIG_LTO 00:11:47.926 #define SPDK_CONFIG_MAX_LCORES 128 00:11:47.926 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:47.926 #define SPDK_CONFIG_NVME_CUSE 1 00:11:47.926 #undef SPDK_CONFIG_OCF 00:11:47.926 #define SPDK_CONFIG_OCF_PATH 00:11:47.926 #define SPDK_CONFIG_OPENSSL_PATH 00:11:47.926 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:47.926 #define SPDK_CONFIG_PGO_DIR 00:11:47.926 #undef SPDK_CONFIG_PGO_USE 00:11:47.926 #define SPDK_CONFIG_PREFIX /usr/local 00:11:47.926 #undef SPDK_CONFIG_RAID5F 00:11:47.926 #undef SPDK_CONFIG_RBD 00:11:47.926 #define SPDK_CONFIG_RDMA 1 00:11:47.926 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:47.926 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:47.926 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:47.926 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:47.926 #define SPDK_CONFIG_SHARED 1 00:11:47.926 #undef SPDK_CONFIG_SMA 00:11:47.926 #define SPDK_CONFIG_TESTS 1 00:11:47.926 #undef SPDK_CONFIG_TSAN 00:11:47.926 #define SPDK_CONFIG_UBLK 1 00:11:47.926 #define SPDK_CONFIG_UBSAN 1 00:11:47.926 #undef SPDK_CONFIG_UNIT_TESTS 00:11:47.926 #undef SPDK_CONFIG_URING 00:11:47.926 #define SPDK_CONFIG_URING_PATH 00:11:47.926 #undef SPDK_CONFIG_URING_ZNS 00:11:47.926 #undef SPDK_CONFIG_USDT 00:11:47.926 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:47.926 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:47.926 #undef SPDK_CONFIG_VFIO_USER 00:11:47.926 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:47.926 #define SPDK_CONFIG_VHOST 1 00:11:47.926 #define SPDK_CONFIG_VIRTIO 1 00:11:47.926 #undef SPDK_CONFIG_VTUNE 00:11:47.926 #define SPDK_CONFIG_VTUNE_DIR 00:11:47.926 #define SPDK_CONFIG_WERROR 1 00:11:47.926 #define SPDK_CONFIG_WPDK_DIR 00:11:47.926 #undef SPDK_CONFIG_XNVME 00:11:47.926 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:47.926 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:47.927 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:47.927 
07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:47.927 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:47.927 
07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:47.927 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:47.927 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:47.928 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2889257 ]] 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2889257 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.IPj2ux 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.IPj2ux/tests/target /tmp/spdk.IPj2ux 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55042547712 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6945972224 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:47.929 
07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375269376 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993985536 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:47.929 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=274432 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:47.929 * Looking for test storage... 
00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55042547712 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9160564736 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.929 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:47.929 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:47.930 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:47.930 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:47.930 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:47.930 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:47.930 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:47.930 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:47.930 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:47.930 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:47.930 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.188 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.188 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.188 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.188 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.189 --rc genhtml_branch_coverage=1 00:11:48.189 --rc genhtml_function_coverage=1 00:11:48.189 --rc genhtml_legend=1 00:11:48.189 --rc geninfo_all_blocks=1 00:11:48.189 --rc geninfo_unexecuted_blocks=1 00:11:48.189 00:11:48.189 ' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.189 --rc genhtml_branch_coverage=1 00:11:48.189 --rc genhtml_function_coverage=1 00:11:48.189 --rc genhtml_legend=1 00:11:48.189 --rc geninfo_all_blocks=1 00:11:48.189 --rc geninfo_unexecuted_blocks=1 00:11:48.189 00:11:48.189 ' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.189 --rc genhtml_branch_coverage=1 00:11:48.189 --rc genhtml_function_coverage=1 00:11:48.189 --rc genhtml_legend=1 00:11:48.189 --rc geninfo_all_blocks=1 00:11:48.189 --rc geninfo_unexecuted_blocks=1 00:11:48.189 00:11:48.189 ' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.189 --rc genhtml_branch_coverage=1 00:11:48.189 --rc genhtml_function_coverage=1 00:11:48.189 --rc genhtml_legend=1 00:11:48.189 --rc geninfo_all_blocks=1 00:11:48.189 --rc geninfo_unexecuted_blocks=1 00:11:48.189 00:11:48.189 ' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.189 07:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.189 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.190 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.190 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.190 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.190 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.190 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.190 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.190 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.190 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.190 07:35:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.093 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.354 07:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:50.354 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:50.354 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.354 07:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:50.354 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:50.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:50.354 07:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.354 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:50.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:11:50.355 00:11:50.355 --- 10.0.0.2 ping statistics --- 00:11:50.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.355 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:11:50.355 00:11:50.355 --- 10.0.0.1 ping statistics --- 00:11:50.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.355 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:50.355 07:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.355 ************************************ 00:11:50.355 START TEST nvmf_filesystem_no_in_capsule 00:11:50.355 ************************************ 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2891012 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2891012 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2891012 ']' 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.355 07:35:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.614 [2024-11-19 07:35:42.325578] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:50.614 [2024-11-19 07:35:42.325744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.614 [2024-11-19 07:35:42.480468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.873 [2024-11-19 07:35:42.626795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.873 [2024-11-19 07:35:42.626878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:50.873 [2024-11-19 07:35:42.626904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.873 [2024-11-19 07:35:42.626938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.873 [2024-11-19 07:35:42.626959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.873 [2024-11-19 07:35:42.629743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.873 [2024-11-19 07:35:42.629813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.873 [2024-11-19 07:35:42.629912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.873 [2024-11-19 07:35:42.629917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.440 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.440 [2024-11-19 07:35:43.356808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.699 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.699 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:51.699 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.699 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.266 Malloc1 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.267 [2024-11-19 07:35:43.962547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:52.267 07:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:52.267 { 00:11:52.267 "name": "Malloc1", 00:11:52.267 "aliases": [ 00:11:52.267 "4ff71e08-7473-41b2-8422-3aedd83052be" 00:11:52.267 ], 00:11:52.267 "product_name": "Malloc disk", 00:11:52.267 "block_size": 512, 00:11:52.267 "num_blocks": 1048576, 00:11:52.267 "uuid": "4ff71e08-7473-41b2-8422-3aedd83052be", 00:11:52.267 "assigned_rate_limits": { 00:11:52.267 "rw_ios_per_sec": 0, 00:11:52.267 "rw_mbytes_per_sec": 0, 00:11:52.267 "r_mbytes_per_sec": 0, 00:11:52.267 "w_mbytes_per_sec": 0 00:11:52.267 }, 00:11:52.267 "claimed": true, 00:11:52.267 "claim_type": "exclusive_write", 00:11:52.267 "zoned": false, 00:11:52.267 "supported_io_types": { 00:11:52.267 "read": true, 00:11:52.267 "write": true, 00:11:52.267 "unmap": true, 00:11:52.267 "flush": true, 00:11:52.267 "reset": true, 00:11:52.267 "nvme_admin": false, 00:11:52.267 "nvme_io": false, 00:11:52.267 "nvme_io_md": false, 00:11:52.267 "write_zeroes": true, 00:11:52.267 "zcopy": true, 00:11:52.267 "get_zone_info": false, 00:11:52.267 "zone_management": false, 00:11:52.267 "zone_append": false, 00:11:52.267 "compare": false, 00:11:52.267 "compare_and_write": 
false, 00:11:52.267 "abort": true, 00:11:52.267 "seek_hole": false, 00:11:52.267 "seek_data": false, 00:11:52.267 "copy": true, 00:11:52.267 "nvme_iov_md": false 00:11:52.267 }, 00:11:52.267 "memory_domains": [ 00:11:52.267 { 00:11:52.267 "dma_device_id": "system", 00:11:52.267 "dma_device_type": 1 00:11:52.267 }, 00:11:52.267 { 00:11:52.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.267 "dma_device_type": 2 00:11:52.267 } 00:11:52.267 ], 00:11:52.267 "driver_specific": {} 00:11:52.267 } 00:11:52.267 ]' 00:11:52.267 07:35:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:52.267 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:52.267 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:52.267 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:52.267 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:52.267 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:52.267 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:52.267 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.834 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:52.834 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:52.834 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.834 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:52.834 07:35:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:55.365 07:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:55.365 07:35:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:55.365 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:55.931 07:35:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:56.866 07:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.866 ************************************ 00:11:56.866 START TEST filesystem_ext4 00:11:56.866 ************************************ 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:56.866 07:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:56.866 07:35:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:56.866 mke2fs 1.47.0 (5-Feb-2023) 00:11:56.866 Discarding device blocks: 0/522240 done 00:11:56.866 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:56.866 Filesystem UUID: cbfe147b-c724-483c-b727-05824d5c7337 00:11:56.866 Superblock backups stored on blocks: 00:11:56.866 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:56.866 00:11:56.866 Allocating group tables: 0/64 done 00:11:56.866 Writing inode tables: 0/64 done 00:11:57.816 Creating journal (8192 blocks): done 00:12:00.015 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:12:00.015 00:12:00.015 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:00.015 07:35:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.574 07:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2891012 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.574 00:12:06.574 real 0m8.984s 00:12:06.574 user 0m0.022s 00:12:06.574 sys 0m0.062s 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:06.574 ************************************ 00:12:06.574 END TEST filesystem_ext4 00:12:06.574 ************************************ 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:06.574 
07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.574 ************************************ 00:12:06.574 START TEST filesystem_btrfs 00:12:06.574 ************************************ 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:06.574 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:06.575 07:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:06.575 btrfs-progs v6.8.1 00:12:06.575 See https://btrfs.readthedocs.io for more information. 00:12:06.575 00:12:06.575 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:06.575 NOTE: several default settings have changed in version 5.15, please make sure 00:12:06.575 this does not affect your deployments: 00:12:06.575 - DUP for metadata (-m dup) 00:12:06.575 - enabled no-holes (-O no-holes) 00:12:06.575 - enabled free-space-tree (-R free-space-tree) 00:12:06.575 00:12:06.575 Label: (null) 00:12:06.575 UUID: 28dc988a-9ad8-4e02-bb69-9de353515fb4 00:12:06.575 Node size: 16384 00:12:06.575 Sector size: 4096 (CPU page size: 4096) 00:12:06.575 Filesystem size: 510.00MiB 00:12:06.575 Block group profiles: 00:12:06.575 Data: single 8.00MiB 00:12:06.575 Metadata: DUP 32.00MiB 00:12:06.575 System: DUP 8.00MiB 00:12:06.575 SSD detected: yes 00:12:06.575 Zoned device: no 00:12:06.575 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:06.575 Checksum: crc32c 00:12:06.575 Number of devices: 1 00:12:06.575 Devices: 00:12:06.575 ID SIZE PATH 00:12:06.575 1 510.00MiB /dev/nvme0n1p1 00:12:06.575 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:06.575 07:35:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.575 07:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2891012 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.575 00:12:06.575 real 0m0.684s 00:12:06.575 user 0m0.020s 00:12:06.575 sys 0m0.103s 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.575 
07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.575 ************************************ 00:12:06.575 END TEST filesystem_btrfs 00:12:06.575 ************************************ 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.575 ************************************ 00:12:06.575 START TEST filesystem_xfs 00:12:06.575 ************************************ 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:06.575 07:35:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:06.834 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:06.834 = sectsz=512 attr=2, projid32bit=1 00:12:06.834 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:06.834 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:06.834 data = bsize=4096 blocks=130560, imaxpct=25 00:12:06.834 = sunit=0 swidth=0 blks 00:12:06.834 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:06.834 log =internal log bsize=4096 blocks=16384, version=2 00:12:06.834 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:06.834 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:07.411 Discarding blocks...Done. 
00:12:07.411 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:07.411 07:35:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2891012 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.946 07:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.946 00:12:09.946 real 0m3.400s 00:12:09.946 user 0m0.024s 00:12:09.946 sys 0m0.057s 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.946 ************************************ 00:12:09.946 END TEST filesystem_xfs 00:12:09.946 ************************************ 00:12:09.946 07:36:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:10.204 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:10.204 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.204 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.204 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:10.204 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:10.204 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.462 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2891012 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2891012 ']' 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2891012 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2891012 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2891012' 00:12:10.463 killing process with pid 2891012 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2891012 00:12:10.463 07:36:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2891012 00:12:13.051 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:13.051 00:12:13.052 real 0m22.382s 00:12:13.052 user 1m24.955s 00:12:13.052 sys 0m2.714s 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.052 ************************************ 00:12:13.052 END TEST nvmf_filesystem_no_in_capsule 00:12:13.052 ************************************ 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.052 07:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.052 ************************************ 00:12:13.052 START TEST nvmf_filesystem_in_capsule 00:12:13.052 ************************************ 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2893899 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2893899 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2893899 ']' 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.052 07:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.052 07:36:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.052 [2024-11-19 07:36:04.760768] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:13.052 [2024-11-19 07:36:04.760906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.052 [2024-11-19 07:36:04.905978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.311 [2024-11-19 07:36:05.045916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.311 [2024-11-19 07:36:05.046007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.311 [2024-11-19 07:36:05.046034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.311 [2024-11-19 07:36:05.046064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.311 [2024-11-19 07:36:05.046084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:13.311 [2024-11-19 07:36:05.048938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.311 [2024-11-19 07:36:05.049008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.311 [2024-11-19 07:36:05.049103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.311 [2024-11-19 07:36:05.049108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.878 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.878 [2024-11-19 07:36:05.800870] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.136 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.136 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:14.136 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.136 07:36:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.704 Malloc1 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.704 07:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.704 [2024-11-19 07:36:06.392481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.704 07:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.704 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.705 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:14.705 { 00:12:14.705 "name": "Malloc1", 00:12:14.705 "aliases": [ 00:12:14.705 "4ef483f3-84cf-4ac9-a220-21349bf303b8" 00:12:14.705 ], 00:12:14.705 "product_name": "Malloc disk", 00:12:14.705 "block_size": 512, 00:12:14.705 "num_blocks": 1048576, 00:12:14.705 "uuid": "4ef483f3-84cf-4ac9-a220-21349bf303b8", 00:12:14.705 "assigned_rate_limits": { 00:12:14.705 "rw_ios_per_sec": 0, 00:12:14.705 "rw_mbytes_per_sec": 0, 00:12:14.705 "r_mbytes_per_sec": 0, 00:12:14.705 "w_mbytes_per_sec": 0 00:12:14.705 }, 00:12:14.705 "claimed": true, 00:12:14.705 "claim_type": "exclusive_write", 00:12:14.705 "zoned": false, 00:12:14.705 "supported_io_types": { 00:12:14.705 "read": true, 00:12:14.705 "write": true, 00:12:14.705 "unmap": true, 00:12:14.705 "flush": true, 00:12:14.705 "reset": true, 00:12:14.705 "nvme_admin": false, 00:12:14.705 "nvme_io": false, 00:12:14.705 "nvme_io_md": false, 00:12:14.705 "write_zeroes": true, 00:12:14.705 "zcopy": true, 00:12:14.705 "get_zone_info": false, 00:12:14.705 "zone_management": false, 00:12:14.705 "zone_append": false, 00:12:14.705 "compare": false, 00:12:14.705 "compare_and_write": false, 00:12:14.705 "abort": true, 00:12:14.705 "seek_hole": false, 00:12:14.705 "seek_data": false, 00:12:14.705 "copy": true, 00:12:14.705 "nvme_iov_md": false 00:12:14.705 }, 00:12:14.705 "memory_domains": [ 00:12:14.705 { 00:12:14.705 "dma_device_id": "system", 00:12:14.705 "dma_device_type": 1 00:12:14.705 }, 00:12:14.705 { 00:12:14.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.705 "dma_device_type": 2 00:12:14.705 } 00:12:14.705 ], 00:12:14.705 
"driver_specific": {} 00:12:14.705 } 00:12:14.705 ]' 00:12:14.705 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:14.705 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:14.705 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:14.705 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:14.705 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:14.705 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:14.705 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:14.705 07:36:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.270 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.270 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.270 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.270 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:15.270 07:36:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:17.799 07:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:17.799 07:36:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:18.366 07:36:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:19.300 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.301 ************************************ 00:12:19.301 START TEST filesystem_in_capsule_ext4 00:12:19.301 ************************************ 00:12:19.301 07:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:19.301 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:19.301 mke2fs 1.47.0 (5-Feb-2023) 00:12:19.559 Discarding device blocks: 
0/522240 done 00:12:19.559 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:19.559 Filesystem UUID: 77fd3e3f-c2f3-42ea-acad-599d94b92ee4 00:12:19.559 Superblock backups stored on blocks: 00:12:19.559 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:19.559 00:12:19.559 Allocating group tables: 0/64 done 00:12:19.559 Writing inode tables: 0/64 done 00:12:19.559 Creating journal (8192 blocks): done 00:12:19.559 Writing superblocks and filesystem accounting information: 0/64 done 00:12:19.559 00:12:19.559 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:19.559 07:36:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2893899 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.121 00:12:26.121 real 0m5.826s 00:12:26.121 user 0m0.017s 00:12:26.121 sys 0m0.066s 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:26.121 ************************************ 00:12:26.121 END TEST filesystem_in_capsule_ext4 00:12:26.121 ************************************ 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.121 ************************************ 00:12:26.121 START 
TEST filesystem_in_capsule_btrfs 00:12:26.121 ************************************ 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:26.121 07:36:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:26.121 btrfs-progs v6.8.1 00:12:26.121 See https://btrfs.readthedocs.io for more information. 00:12:26.121 00:12:26.121 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:26.121 NOTE: several default settings have changed in version 5.15, please make sure 00:12:26.121 this does not affect your deployments: 00:12:26.121 - DUP for metadata (-m dup) 00:12:26.121 - enabled no-holes (-O no-holes) 00:12:26.121 - enabled free-space-tree (-R free-space-tree) 00:12:26.121 00:12:26.121 Label: (null) 00:12:26.121 UUID: ff283aba-865d-4f80-84c8-47e06401dc7e 00:12:26.121 Node size: 16384 00:12:26.121 Sector size: 4096 (CPU page size: 4096) 00:12:26.121 Filesystem size: 510.00MiB 00:12:26.121 Block group profiles: 00:12:26.121 Data: single 8.00MiB 00:12:26.121 Metadata: DUP 32.00MiB 00:12:26.121 System: DUP 8.00MiB 00:12:26.121 SSD detected: yes 00:12:26.121 Zoned device: no 00:12:26.121 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:26.121 Checksum: crc32c 00:12:26.121 Number of devices: 1 00:12:26.121 Devices: 00:12:26.121 ID SIZE PATH 00:12:26.121 1 510.00MiB /dev/nvme0n1p1 00:12:26.121 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2893899 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:26.121 00:12:26.121 real 0m0.845s 00:12:26.121 user 0m0.012s 00:12:26.121 sys 0m0.112s 00:12:26.121 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:26.122 ************************************ 00:12:26.122 END TEST filesystem_in_capsule_btrfs 00:12:26.122 ************************************ 00:12:26.122 07:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.122 ************************************ 00:12:26.122 START TEST filesystem_in_capsule_xfs 00:12:26.122 ************************************ 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:26.122 
07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:26.122 07:36:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:26.122 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:26.122 = sectsz=512 attr=2, projid32bit=1 00:12:26.122 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:26.122 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:26.122 data = bsize=4096 blocks=130560, imaxpct=25 00:12:26.122 = sunit=0 swidth=0 blks 00:12:26.122 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:26.122 log =internal log bsize=4096 blocks=16384, version=2 00:12:26.122 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:26.122 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:27.058 Discarding blocks...Done. 
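The ext4, btrfs, and xfs runs above all go through the same `make_filesystem` helper from common/autotest_common.sh: pick `-F` for ext4 and `-f` for everything else, then run `mkfs.$fstype`, retrying while `i` counts up in case the namespace is still settling. A minimal standalone sketch of that flow (the real helper's retry bound differs, `do_mkfs` is a stub standing in for the actual `mkfs.*` binaries, and `/dev/fake0` is a placeholder, so this runs without a block device):

```shell
# Sketch of the make_filesystem helper traced above; do_mkfs stubs the
# real mkfs.$fstype call so no device is needed.
make_filesystem() {
    fstype=$1
    dev_name=$2
    # ext4 takes -F to force; btrfs/xfs take -f (matches force=-F vs
    # force=-f in the trace)
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    i=0
    # retry a few times in case the device is not ready yet
    until do_mkfs "$fstype" "$force" "$dev_name"; do
        i=$((i + 1))
        [ "$i" -lt 3 ] || return 1
        sleep 1
    done
}

# stub standing in for the real mkfs.$fstype binaries
do_mkfs() { echo "mkfs.$1 $2 $3"; }

make_filesystem ext4 /dev/fake0
make_filesystem xfs  /dev/fake0
```

After a successful mkfs the test mounts the partition, touches and removes a file with `sync` in between, and unmounts, exactly as target/filesystem.sh@23-30 shows for each filesystem type.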
00:12:27.058 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:27.058 07:36:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2893899 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:29.589 00:12:29.589 real 0m3.532s 00:12:29.589 user 0m0.018s 00:12:29.589 sys 0m0.056s 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:29.589 ************************************ 00:12:29.589 END TEST filesystem_in_capsule_xfs 00:12:29.589 ************************************ 00:12:29.589 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:29.848 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:29.848 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.107 07:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2893899 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2893899 ']' 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2893899 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.107 07:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2893899 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2893899' 00:12:30.107 killing process with pid 2893899 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2893899 00:12:30.107 07:36:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2893899 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:32.641 00:12:32.641 real 0m19.628s 00:12:32.641 user 1m14.443s 00:12:32.641 sys 0m2.448s 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.641 ************************************ 00:12:32.641 END TEST nvmf_filesystem_in_capsule 00:12:32.641 ************************************ 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.641 rmmod nvme_tcp 00:12:32.641 rmmod nvme_fabrics 00:12:32.641 rmmod nvme_keyring 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.641 07:36:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.547 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:34.548 00:12:34.548 real 0m46.843s 00:12:34.548 user 2m40.520s 00:12:34.548 sys 0m6.890s 00:12:34.548 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.548 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:34.548 ************************************ 00:12:34.548 END TEST nvmf_filesystem 00:12:34.548 ************************************ 00:12:34.548 07:36:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:34.548 07:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:34.548 07:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.548 07:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.548 ************************************ 00:12:34.548 START TEST nvmf_target_discovery 00:12:34.548 ************************************ 00:12:34.548 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:34.806 * Looking for test storage... 
00:12:34.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.806 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:34.807 
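The scripts/common.sh trace that follows (`lt 1.15 2` via `cmp_versions 1.15 '<' 2`) splits each version string on `.`, `-`, and `:` into the `ver1`/`ver2` arrays and compares component by component. A compact sketch of the same less-than check (the function name here is illustrative, not the script's exact helper, and it delegates the numeric comparison to awk rather than the script's bash arrays):

```shell
# Returns 0 when $1 < $2, comparing dotted versions component-wise,
# like the lt()/cmp_versions helpers traced above. Missing components
# are treated as 0, so 1.15 < 2 and 2.0 is not < 2.
version_lt() {
    awk -v a="$1" -v b="$2" '
    BEGIN {
        na = split(a, x, /[.:-]/)
        nb = split(b, y, /[.:-]/)
        n = (na > nb) ? na : nb
        for (i = 1; i <= n; i++) {
            xi = (i <= na) ? x[i] + 0 : 0
            yi = (i <= nb) ? y[i] + 0 : 0
            if (xi < yi) exit 0   # strictly less at this component
            if (xi > yi) exit 1   # strictly greater: not less-than
        }
        exit 1                    # equal overall: not less-than
    }'
}

version_lt 1.15 2 && echo "1.15 < 2"
```

In the log this check gates which lcov coverage options get exported (`--rc lcov_branch_coverage=1 ...` vs the older `genhtml_*` spelling).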
07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:34.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.807 --rc genhtml_branch_coverage=1 00:12:34.807 --rc genhtml_function_coverage=1 00:12:34.807 --rc genhtml_legend=1 00:12:34.807 --rc geninfo_all_blocks=1 00:12:34.807 --rc geninfo_unexecuted_blocks=1 00:12:34.807 00:12:34.807 ' 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:34.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.807 --rc genhtml_branch_coverage=1 00:12:34.807 --rc genhtml_function_coverage=1 00:12:34.807 --rc genhtml_legend=1 00:12:34.807 --rc geninfo_all_blocks=1 00:12:34.807 --rc geninfo_unexecuted_blocks=1 00:12:34.807 00:12:34.807 ' 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:34.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.807 --rc genhtml_branch_coverage=1 00:12:34.807 --rc genhtml_function_coverage=1 00:12:34.807 --rc genhtml_legend=1 00:12:34.807 --rc geninfo_all_blocks=1 00:12:34.807 --rc geninfo_unexecuted_blocks=1 00:12:34.807 00:12:34.807 ' 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:34.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.807 --rc genhtml_branch_coverage=1 00:12:34.807 --rc genhtml_function_coverage=1 00:12:34.807 --rc genhtml_legend=1 00:12:34.807 --rc geninfo_all_blocks=1 00:12:34.807 --rc geninfo_unexecuted_blocks=1 00:12:34.807 00:12:34.807 ' 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.807 07:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:34.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:34.807 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.808 07:36:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.337 07:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.337 07:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:37.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:37.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.337 07:36:28 
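The loop above walks `pci_devs`; each matching function is then resolved to its kernel interfaces by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the directory prefix. A standalone sketch of that lookup — the sysfs-root parameter is an addition (not in the original script) so it can be exercised against a fake tree without the real hardware:

```shell
#!/usr/bin/env bash
# Sketch of the PCI-to-netdev lookup from nvmf/common.sh:
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
#   pci_net_devs=("${pci_net_devs[@]##*/}")
# The $root parameter is an assumption added here for testability.
pci_to_net_devs() {
    local pci=$1 root=${2:-/sys/bus/pci/devices}
    local pci_net_devs=("$root/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
    echo "${pci_net_devs[@]}"
}

# Exercise against a fake sysfs tree mirroring the log's first device.
mkdir -p /tmp/fake_sysfs/0000:0a:00.0/net/cvl_0_0
pci_to_net_devs 0000:0a:00.0 /tmp/fake_sysfs   # prints: cvl_0_0
```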
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:37.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.337 07:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.337 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:37.338 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:37.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:12:37.338 00:12:37.338 --- 10.0.0.2 ping statistics --- 00:12:37.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.338 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:12:37.338 00:12:37.338 --- 10.0.0.1 ping statistics --- 00:12:37.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.338 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2898944 00:12:37.338 07:36:28 
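The `nvmf_tcp_init` steps recorded above move the target NIC into its own namespace, address both sides on 10.0.0.0/24, open TCP port 4420 via iptables, and ping in each direction. A dry-run sketch of that plumbing — `run` only prints each command, since the real ones need root and the `cvl_0_*` devices:

```shell
#!/usr/bin/env bash
# Dry-run of the namespace setup nvmf_tcp_init performs in the log above.
# run() only echoes; executing these for real requires root privileges
# and the cvl_0_0/cvl_0_1 interfaces created by the ice driver.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```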
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2898944 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2898944 ']' 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.338 07:36:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.338 [2024-11-19 07:36:29.022001] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:37.338 [2024-11-19 07:36:29.022143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.338 [2024-11-19 07:36:29.173558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.596 [2024-11-19 07:36:29.316888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:37.596 [2024-11-19 07:36:29.316979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.596 [2024-11-19 07:36:29.317005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.597 [2024-11-19 07:36:29.317028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.597 [2024-11-19 07:36:29.317049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.597 [2024-11-19 07:36:29.319936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.597 [2024-11-19 07:36:29.320009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.597 [2024-11-19 07:36:29.320108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.597 [2024-11-19 07:36:29.320113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.163 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.163 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:38.163 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.163 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.163 07:36:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.163 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.163 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.163 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.163 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.164 [2024-11-19 07:36:30.014664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.164 Null1 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.164 
07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.164 [2024-11-19 07:36:30.072000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.164 Null2 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.164 
07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.164 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.422 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.422 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:38.422 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.422 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.422 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.422 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.422 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:38.422 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 Null3 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 Null4 00:12:38.423 
07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.423 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:38.682 00:12:38.682 Discovery Log Number of Records 6, Generation counter 6 00:12:38.682 =====Discovery Log Entry 0====== 00:12:38.682 trtype: tcp 00:12:38.682 adrfam: ipv4 00:12:38.682 subtype: current discovery subsystem 00:12:38.682 treq: not required 00:12:38.682 portid: 0 00:12:38.682 trsvcid: 4420 00:12:38.682 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:38.682 traddr: 10.0.0.2 00:12:38.682 eflags: explicit discovery connections, duplicate discovery information 00:12:38.682 sectype: none 00:12:38.682 =====Discovery Log Entry 1====== 00:12:38.682 trtype: tcp 00:12:38.682 adrfam: ipv4 00:12:38.682 subtype: nvme subsystem 00:12:38.682 treq: not required 00:12:38.682 portid: 0 00:12:38.682 trsvcid: 4420 00:12:38.682 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:38.682 traddr: 10.0.0.2 00:12:38.682 eflags: none 00:12:38.682 sectype: none 00:12:38.682 =====Discovery Log Entry 2====== 00:12:38.682 
trtype: tcp 00:12:38.682 adrfam: ipv4 00:12:38.682 subtype: nvme subsystem 00:12:38.682 treq: not required 00:12:38.682 portid: 0 00:12:38.682 trsvcid: 4420 00:12:38.682 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:38.682 traddr: 10.0.0.2 00:12:38.682 eflags: none 00:12:38.682 sectype: none 00:12:38.682 =====Discovery Log Entry 3====== 00:12:38.682 trtype: tcp 00:12:38.682 adrfam: ipv4 00:12:38.682 subtype: nvme subsystem 00:12:38.682 treq: not required 00:12:38.682 portid: 0 00:12:38.682 trsvcid: 4420 00:12:38.682 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:38.682 traddr: 10.0.0.2 00:12:38.682 eflags: none 00:12:38.682 sectype: none 00:12:38.682 =====Discovery Log Entry 4====== 00:12:38.682 trtype: tcp 00:12:38.682 adrfam: ipv4 00:12:38.682 subtype: nvme subsystem 00:12:38.682 treq: not required 00:12:38.682 portid: 0 00:12:38.682 trsvcid: 4420 00:12:38.682 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:38.682 traddr: 10.0.0.2 00:12:38.682 eflags: none 00:12:38.682 sectype: none 00:12:38.682 =====Discovery Log Entry 5====== 00:12:38.682 trtype: tcp 00:12:38.682 adrfam: ipv4 00:12:38.682 subtype: discovery subsystem referral 00:12:38.682 treq: not required 00:12:38.682 portid: 0 00:12:38.682 trsvcid: 4430 00:12:38.682 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:38.682 traddr: 10.0.0.2 00:12:38.682 eflags: none 00:12:38.682 sectype: none 00:12:38.682 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:38.682 Perform nvmf subsystem discovery via RPC 00:12:38.682 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:38.682 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.682 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.682 [ 00:12:38.682 { 00:12:38.682 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:38.682 "subtype": "Discovery", 00:12:38.682 "listen_addresses": [ 00:12:38.682 { 00:12:38.682 "trtype": "TCP", 00:12:38.682 "adrfam": "IPv4", 00:12:38.682 "traddr": "10.0.0.2", 00:12:38.682 "trsvcid": "4420" 00:12:38.682 } 00:12:38.682 ], 00:12:38.682 "allow_any_host": true, 00:12:38.682 "hosts": [] 00:12:38.682 }, 00:12:38.682 { 00:12:38.682 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.682 "subtype": "NVMe", 00:12:38.682 "listen_addresses": [ 00:12:38.682 { 00:12:38.682 "trtype": "TCP", 00:12:38.682 "adrfam": "IPv4", 00:12:38.682 "traddr": "10.0.0.2", 00:12:38.682 "trsvcid": "4420" 00:12:38.682 } 00:12:38.682 ], 00:12:38.682 "allow_any_host": true, 00:12:38.682 "hosts": [], 00:12:38.682 "serial_number": "SPDK00000000000001", 00:12:38.682 "model_number": "SPDK bdev Controller", 00:12:38.682 "max_namespaces": 32, 00:12:38.682 "min_cntlid": 1, 00:12:38.682 "max_cntlid": 65519, 00:12:38.682 "namespaces": [ 00:12:38.682 { 00:12:38.682 "nsid": 1, 00:12:38.682 "bdev_name": "Null1", 00:12:38.682 "name": "Null1", 00:12:38.682 "nguid": "B077F24842CE487A997EC64A007EF050", 00:12:38.682 "uuid": "b077f248-42ce-487a-997e-c64a007ef050" 00:12:38.682 } 00:12:38.682 ] 00:12:38.682 }, 00:12:38.682 { 00:12:38.682 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:38.682 "subtype": "NVMe", 00:12:38.682 "listen_addresses": [ 00:12:38.682 { 00:12:38.682 "trtype": "TCP", 00:12:38.682 "adrfam": "IPv4", 00:12:38.682 "traddr": "10.0.0.2", 00:12:38.682 "trsvcid": "4420" 00:12:38.682 } 00:12:38.682 ], 00:12:38.682 "allow_any_host": true, 00:12:38.682 "hosts": [], 00:12:38.682 "serial_number": "SPDK00000000000002", 00:12:38.682 "model_number": "SPDK bdev Controller", 00:12:38.682 "max_namespaces": 32, 00:12:38.682 "min_cntlid": 1, 00:12:38.682 "max_cntlid": 65519, 00:12:38.682 "namespaces": [ 00:12:38.682 { 00:12:38.682 "nsid": 1, 00:12:38.682 "bdev_name": "Null2", 00:12:38.682 "name": "Null2", 00:12:38.682 "nguid": "26DF2506663F4B18AEDD1BAE669E251D", 
00:12:38.682 "uuid": "26df2506-663f-4b18-aedd-1bae669e251d" 00:12:38.682 } 00:12:38.682 ] 00:12:38.682 }, 00:12:38.682 { 00:12:38.682 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:38.682 "subtype": "NVMe", 00:12:38.682 "listen_addresses": [ 00:12:38.682 { 00:12:38.682 "trtype": "TCP", 00:12:38.682 "adrfam": "IPv4", 00:12:38.682 "traddr": "10.0.0.2", 00:12:38.682 "trsvcid": "4420" 00:12:38.682 } 00:12:38.682 ], 00:12:38.682 "allow_any_host": true, 00:12:38.682 "hosts": [], 00:12:38.682 "serial_number": "SPDK00000000000003", 00:12:38.682 "model_number": "SPDK bdev Controller", 00:12:38.682 "max_namespaces": 32, 00:12:38.682 "min_cntlid": 1, 00:12:38.682 "max_cntlid": 65519, 00:12:38.682 "namespaces": [ 00:12:38.682 { 00:12:38.682 "nsid": 1, 00:12:38.683 "bdev_name": "Null3", 00:12:38.683 "name": "Null3", 00:12:38.683 "nguid": "34603D17BD0E49309587F378D6A5A0DF", 00:12:38.683 "uuid": "34603d17-bd0e-4930-9587-f378d6a5a0df" 00:12:38.683 } 00:12:38.683 ] 00:12:38.683 }, 00:12:38.683 { 00:12:38.683 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:38.683 "subtype": "NVMe", 00:12:38.683 "listen_addresses": [ 00:12:38.683 { 00:12:38.683 "trtype": "TCP", 00:12:38.683 "adrfam": "IPv4", 00:12:38.683 "traddr": "10.0.0.2", 00:12:38.683 "trsvcid": "4420" 00:12:38.683 } 00:12:38.683 ], 00:12:38.683 "allow_any_host": true, 00:12:38.683 "hosts": [], 00:12:38.683 "serial_number": "SPDK00000000000004", 00:12:38.683 "model_number": "SPDK bdev Controller", 00:12:38.683 "max_namespaces": 32, 00:12:38.683 "min_cntlid": 1, 00:12:38.683 "max_cntlid": 65519, 00:12:38.683 "namespaces": [ 00:12:38.683 { 00:12:38.683 "nsid": 1, 00:12:38.683 "bdev_name": "Null4", 00:12:38.683 "name": "Null4", 00:12:38.683 "nguid": "58F287905A0F497492E872FD052DB5D7", 00:12:38.683 "uuid": "58f28790-5a0f-4974-92e8-72fd052db5d7" 00:12:38.683 } 00:12:38.683 ] 00:12:38.683 } 00:12:38.683 ] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 
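The setup traced above (four null bdevs, four subsystems with namespaces and TCP listeners, plus the discovery listener and a referral) can be summarized as the following dry-run sketch. This is a reconstruction from the xtrace lines, not the actual `target/discovery.sh` source; `SPDK_RPC` is a stand-in for SPDK's `scripts/rpc.py` client, replaced here with `echo` so the sketch only prints the commands it would issue.

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence visible in the trace above.
# In the real test, SPDK_RPC would be the path to spdk/scripts/rpc.py.
SPDK_RPC="echo rpc.py"

for i in 1 2 3 4; do
  # 100 MiB null bdev (102400 blocks of 512 bytes), as in the trace
  $SPDK_RPC bdev_null_create "Null$i" 102400 512
  # subsystem with allow-any-host (-a) and a fixed serial number (-s)
  $SPDK_RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  $SPDK_RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  $SPDK_RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

# discovery listener on the same port, plus a referral on 4430
$SPDK_RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$SPDK_RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

With the real RPC client substituted for the `echo` placeholder, this yields exactly the six discovery log entries shown below: four NVMe subsystems, the current discovery subsystem on 4420, and the referral on 4430.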
07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:38.683 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:38.683 rmmod nvme_tcp 00:12:38.683 rmmod nvme_fabrics 00:12:38.683 rmmod nvme_keyring 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2898944 ']' 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2898944 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2898944 ']' 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2898944 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2898944 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2898944' 00:12:38.942 killing process with pid 2898944 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2898944 00:12:38.942 07:36:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2898944 00:12:39.878 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:39.878 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:39.878 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:39.878 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:39.878 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:39.878 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:39.878 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.138 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.138 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.138 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.138 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.138 07:36:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.047 00:12:42.047 real 0m7.388s 00:12:42.047 user 0m9.879s 00:12:42.047 sys 0m2.144s 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:42.047 ************************************ 00:12:42.047 END TEST nvmf_target_discovery 00:12:42.047 ************************************ 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.047 ************************************ 00:12:42.047 START TEST nvmf_referrals 00:12:42.047 ************************************ 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:42.047 * Looking for test storage... 
00:12:42.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:42.047 07:36:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:42.306 07:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.306 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.306 
--rc genhtml_branch_coverage=1 00:12:42.306 --rc genhtml_function_coverage=1 00:12:42.306 --rc genhtml_legend=1 00:12:42.306 --rc geninfo_all_blocks=1 00:12:42.306 --rc geninfo_unexecuted_blocks=1 00:12:42.306 00:12:42.306 ' 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:42.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.307 --rc genhtml_branch_coverage=1 00:12:42.307 --rc genhtml_function_coverage=1 00:12:42.307 --rc genhtml_legend=1 00:12:42.307 --rc geninfo_all_blocks=1 00:12:42.307 --rc geninfo_unexecuted_blocks=1 00:12:42.307 00:12:42.307 ' 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:42.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.307 --rc genhtml_branch_coverage=1 00:12:42.307 --rc genhtml_function_coverage=1 00:12:42.307 --rc genhtml_legend=1 00:12:42.307 --rc geninfo_all_blocks=1 00:12:42.307 --rc geninfo_unexecuted_blocks=1 00:12:42.307 00:12:42.307 ' 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:42.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.307 --rc genhtml_branch_coverage=1 00:12:42.307 --rc genhtml_function_coverage=1 00:12:42.307 --rc genhtml_legend=1 00:12:42.307 --rc geninfo_all_blocks=1 00:12:42.307 --rc geninfo_unexecuted_blocks=1 00:12:42.307 00:12:42.307 ' 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.307 
07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:42.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:12:42.307 07:36:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=()
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:12:44.211 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:12:44.211 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:44.211 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:12:44.212 Found net devices under 0000:0a:00.0: cvl_0_0
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:12:44.212 Found net devices under 0000:0a:00.1: cvl_0_1
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:44.212 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:44.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:44.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms
00:12:44.472
00:12:44.472 --- 10.0.0.2 ping statistics ---
00:12:44.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:44.472 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:44.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:44.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms
00:12:44.472
00:12:44.472 --- 10.0.0.1 ping statistics ---
00:12:44.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:44.472 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2901184
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2901184
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2901184 ']'
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:44.472 07:36:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
[2024-11-19 07:36:36.362385] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
[2024-11-19 07:36:36.362528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:44.731 [2024-11-19 07:36:36.515558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:44.731 [2024-11-19 07:36:36.660938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:44.731 [2024-11-19 07:36:36.661021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:44.731 [2024-11-19 07:36:36.661048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:44.731 [2024-11-19 07:36:36.661072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:44.731 [2024-11-19 07:36:36.661093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:44.731 [2024-11-19 07:36:36.663884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:44.731 [2024-11-19 07:36:36.663916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:44.731 [2024-11-19 07:36:36.663945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:44.731 [2024-11-19 07:36:36.663937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 [2024-11-19 07:36:37.332820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 [2024-11-19 07:36:37.362661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:12:45.666 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:45.667 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:45.667 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:45.667 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:45.667 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:45.925 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:12:46.186 07:36:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json
00:12:46.477 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:46.760 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:47.018 07:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.018 07:36:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.277 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.535 rmmod nvme_tcp 00:12:47.535 rmmod nvme_fabrics 00:12:47.535 rmmod nvme_keyring 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2901184 ']' 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2901184 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2901184 ']' 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2901184 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2901184 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2901184' 00:12:47.535 killing process with pid 2901184 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2901184 00:12:47.535 07:36:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2901184 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.912 07:36:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.816 00:12:50.816 real 0m8.673s 00:12:50.816 user 0m15.962s 00:12:50.816 sys 0m2.514s 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.816 
************************************ 00:12:50.816 END TEST nvmf_referrals 00:12:50.816 ************************************ 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.816 ************************************ 00:12:50.816 START TEST nvmf_connect_disconnect 00:12:50.816 ************************************ 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:50.816 * Looking for test storage... 
00:12:50.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.816 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.076 --rc genhtml_branch_coverage=1 00:12:51.076 --rc genhtml_function_coverage=1 00:12:51.076 --rc genhtml_legend=1 00:12:51.076 --rc geninfo_all_blocks=1 00:12:51.076 --rc geninfo_unexecuted_blocks=1 00:12:51.076 00:12:51.076 ' 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.076 --rc genhtml_branch_coverage=1 00:12:51.076 --rc genhtml_function_coverage=1 00:12:51.076 --rc genhtml_legend=1 00:12:51.076 --rc geninfo_all_blocks=1 00:12:51.076 --rc geninfo_unexecuted_blocks=1 00:12:51.076 00:12:51.076 ' 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.076 --rc genhtml_branch_coverage=1 00:12:51.076 --rc genhtml_function_coverage=1 00:12:51.076 --rc genhtml_legend=1 00:12:51.076 --rc geninfo_all_blocks=1 00:12:51.076 --rc geninfo_unexecuted_blocks=1 00:12:51.076 00:12:51.076 ' 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.076 --rc genhtml_branch_coverage=1 00:12:51.076 --rc genhtml_function_coverage=1 00:12:51.076 --rc genhtml_legend=1 00:12:51.076 --rc geninfo_all_blocks=1 00:12:51.076 --rc geninfo_unexecuted_blocks=1 00:12:51.076 00:12:51.076 ' 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.076 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.077 07:36:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.983 07:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.983 07:36:44 
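The xtrace above shows nvmf/common.sh bucketing supported NICs by PCI vendor:device ID into `e810`, `x722`, and `mlx` arrays. A minimal sketch of that classification, with the ID lists copied from the log; the `pci_bus_cache` lookup from the real script is replaced here by matching a literal `vendor:device` string, purely for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the vendor:device bucketing from nvmf/common.sh above.
# IDs are taken from the log; classify() is a stand-in helper, not the
# real pci_bus_cache mechanism.
intel=0x8086 mellanox=0x15b3
e810=("$intel:0x1592" "$intel:0x159b")
x722=("$intel:0x37d2")
mlx=("$mellanox:0xa2dc" "$mellanox:0x1021" "$mellanox:0x1017" "$mellanox:0x1019")

classify() {
  local id=$1 d
  for d in "${e810[@]}"; do [[ $id == "$d" ]] && { echo e810; return; }; done
  for d in "${x722[@]}"; do [[ $id == "$d" ]] && { echo x722; return; }; done
  for d in "${mlx[@]}";  do [[ $id == "$d" ]] && { echo mlx;  return; }; done
  echo unknown
}

classify "$intel:0x159b"   # the device the log later finds at 0000:0a:00.0
```

With the E810-device ID `0x8086:0x159b` this prints `e810`, matching the `[[ e810 == e810 ]]` branch the log takes next.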
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:52.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:52.983 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:52.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.984 07:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:52.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.984 07:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:52.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.984 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.244 07:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:53.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:12:53.244 00:12:53.244 --- 10.0.0.2 ping statistics --- 00:12:53.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.244 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:53.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:12:53.244 00:12:53.244 --- 10.0.0.1 ping statistics --- 00:12:53.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.244 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:53.244 07:36:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
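The `nvmf_tcp_init` sequence above splits the two E810 ports into an initiator side and a target side by moving one interface into a dedicated network namespace, then verifies the path with `ping` in both directions. A dry-run sketch of that sequence (interface, namespace names, and addresses taken from the log; commands are echoed rather than executed, since the real setup needs root and the physical NICs):

```shell
#!/usr/bin/env bash
# Dry-run of the topology from nvmf/common.sh: target port cvl_0_0 lives
# in netns cvl_0_0_ns_spdk as 10.0.0.2, initiator port cvl_0_1 stays in
# the root namespace as 10.0.0.1.
run() { echo "+ $*"; }   # swap for e.g. "sudo" to execute for real

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Putting the target port in its own namespace forces NVMe/TCP traffic between the two physical ports onto the wire instead of being short-circuited through the local stack, which is why the log later runs `nvmf_tgt` under `ip netns exec`.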
nvmfpid=2903741 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2903741 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2903741 ']' 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.244 07:36:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:53.244 [2024-11-19 07:36:45.127739] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:53.244 [2024-11-19 07:36:45.127900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.502 [2024-11-19 07:36:45.283177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.502 [2024-11-19 07:36:45.414334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:53.502 [2024-11-19 07:36:45.414411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.502 [2024-11-19 07:36:45.414432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.502 [2024-11-19 07:36:45.414452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.502 [2024-11-19 07:36:45.414468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.502 [2024-11-19 07:36:45.416993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.503 [2024-11-19 07:36:45.417056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.503 [2024-11-19 07:36:45.417116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.503 [2024-11-19 07:36:45.417121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.437 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.437 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:54.437 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.437 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.437 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:54.437 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.437 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:54.438 07:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:54.438 [2024-11-19 07:36:46.130741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.438 07:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:54.438 [2024-11-19 07:36:46.261102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:54.438 07:36:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:56.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.540 
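The target-side configuration the script just issued via `rpc_cmd` can be sketched as the following RPC sequence (arguments copied from the log; the `rpc` echo wrapper stands in for SPDK's `rpc.py` talking to `/var/tmp/spdk.sock`, so this is a dry run, not a live configuration):

```shell
#!/usr/bin/env bash
# Dry-run of the rpc_cmd sequence from connect_disconnect.sh: TCP
# transport, a 64 MiB / 512 B-block malloc bdev, and one subsystem with
# that bdev as a namespace, listening on the target-side address.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc bdev_malloc_create 64 512                                 # returns Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `64 512` arguments are the `MALLOC_BDEV_SIZE`/`MALLOC_BLOCK_SIZE` values set at the top of the test, and `-a` allows any host NQN to connect, which the initiator side relies on below.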
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.437 [the same "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line repeats once per connect/disconnect iteration, at roughly 2-second intervals, through 00:16:23.650] 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.613 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:49.613 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:49.613 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:49.613 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:49.613 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:49.613 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:49.613 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:49.613 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:49.613 rmmod nvme_tcp 00:16:49.613 rmmod nvme_fabrics 00:16:49.613 rmmod nvme_keyring 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
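The stream of "disconnected 1 controller(s)" lines above comes from the test's iteration loop: `num_iterations=100` and `NVME_CONNECT='nvme connect -i 8'` were set earlier in the log. A sketch of that loop (loop body assumed from the log's variables, not copied from connect_disconnect.sh; `nvme` is echoed so the sketch runs without a live target):

```shell
#!/usr/bin/env bash
# Sketch of the loop behind the 100 disconnect messages: connect to the
# subsystem with 8 I/O queues, then disconnect, 100 times. The real
# script drives nvme-cli against the live 10.0.0.2:4420 listener.
nvme() { echo "nvme $*"; }   # dry-run stand-in for /usr/sbin/nvme

num_iterations=100
nqn=nqn.2016-06.io.spdk:cnode1
for ((i = 1; i <= num_iterations; i++)); do
  nvme connect -i 8 -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 >/dev/null
  nvme disconnect -n "$nqn" >/dev/null
done
```

Each `nvme disconnect -n <nqn>` is what prints the "NQN:... disconnected 1 controller(s)" line in the real run, so 100 iterations produce exactly the 100 messages logged above over roughly four minutes.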
-- # modprobe -v -r nvme-fabrics 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2903741 ']' 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2903741 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2903741 ']' 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2903741 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2903741 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2903741' 00:16:49.613 killing process with pid 2903741 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2903741 00:16:49.613 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2903741 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:50.549 07:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.549 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:53.083 00:16:53.083 real 4m1.764s 00:16:53.083 user 15m14.836s 00:16:53.083 sys 0m38.842s 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:53.083 ************************************ 00:16:53.083 END TEST nvmf_connect_disconnect 00:16:53.083 ************************************ 00:16:53.083 07:40:44 
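The teardown above uses a tag-and-sweep pattern for firewall cleanup: `ipts` added its rule with an `SPDK_NVMF:` comment, and `iptr` removes every tagged rule in one pass with `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of the filtering step, run against a small fake saved ruleset instead of the live firewall:

```shell
#!/usr/bin/env bash
# Sketch of the iptr cleanup from nvmf/common.sh: any rule carrying the
# SPDK_NVMF comment tag is dropped when the ruleset is round-tripped.
# "saved" is a fabricated stand-in for real iptables-save output.
saved='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'

# stand-in for: iptables-save | grep -v SPDK_NVMF | iptables-restore
cleaned=$(grep -v SPDK_NVMF <<<"$saved")
echo "$cleaned"
```

Tagging rules at insertion time means cleanup never has to remember rule positions or exact arguments, which keeps the teardown robust even if the test aborted partway through setup.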
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:53.083 ************************************ 00:16:53.083 START TEST nvmf_multitarget 00:16:53.083 ************************************ 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:53.083 * Looking for test storage... 00:16:53.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:53.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.083 --rc genhtml_branch_coverage=1 00:16:53.083 --rc genhtml_function_coverage=1 00:16:53.083 --rc genhtml_legend=1 00:16:53.083 --rc geninfo_all_blocks=1 00:16:53.083 --rc 
geninfo_unexecuted_blocks=1 00:16:53.083 00:16:53.083 ' 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:53.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.083 --rc genhtml_branch_coverage=1 00:16:53.083 --rc genhtml_function_coverage=1 00:16:53.083 --rc genhtml_legend=1 00:16:53.083 --rc geninfo_all_blocks=1 00:16:53.083 --rc geninfo_unexecuted_blocks=1 00:16:53.083 00:16:53.083 ' 00:16:53.083 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:53.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.084 --rc genhtml_branch_coverage=1 00:16:53.084 --rc genhtml_function_coverage=1 00:16:53.084 --rc genhtml_legend=1 00:16:53.084 --rc geninfo_all_blocks=1 00:16:53.084 --rc geninfo_unexecuted_blocks=1 00:16:53.084 00:16:53.084 ' 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:53.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.084 --rc genhtml_branch_coverage=1 00:16:53.084 --rc genhtml_function_coverage=1 00:16:53.084 --rc genhtml_legend=1 00:16:53.084 --rc geninfo_all_blocks=1 00:16:53.084 --rc geninfo_unexecuted_blocks=1 00:16:53.084 00:16:53.084 ' 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.084 07:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.084 07:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:53.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:53.084 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:54.988 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:54.988 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:54.988 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:54.988 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:54.988 07:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:54.988 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:55.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:16:55.247 00:16:55.247 --- 10.0.0.2 ping statistics --- 00:16:55.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.247 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:55.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:16:55.247 00:16:55.247 --- 10.0.0.1 ping statistics --- 00:16:55.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.247 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2935364 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2935364 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2935364 ']' 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.247 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:55.247 [2024-11-19 07:40:47.070866] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:55.247 [2024-11-19 07:40:47.071017] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.505 [2024-11-19 07:40:47.214058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.505 [2024-11-19 07:40:47.346946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.505 [2024-11-19 07:40:47.347038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.505 [2024-11-19 07:40:47.347064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.505 [2024-11-19 07:40:47.347089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.505 [2024-11-19 07:40:47.347109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.505 [2024-11-19 07:40:47.350247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.505 [2024-11-19 07:40:47.350320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.505 [2024-11-19 07:40:47.350417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.505 [2024-11-19 07:40:47.350423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:56.438 "nvmf_tgt_1" 00:16:56.438 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:56.695 "nvmf_tgt_2" 00:16:56.696 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:56.696 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:56.696 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:56.696 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:56.953 true 00:16:56.953 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:56.953 true 00:16:56.953 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:56.953 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:57.229 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:57.229 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:57.229 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:57.229 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:57.229 07:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:57.229 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.229 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:57.230 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.230 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.230 rmmod nvme_tcp 00:16:57.230 rmmod nvme_fabrics 00:16:57.230 rmmod nvme_keyring 00:16:57.230 07:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2935364 ']' 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2935364 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2935364 ']' 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2935364 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2935364 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2935364' 00:16:57.230 killing process with pid 2935364 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2935364 00:16:57.230 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2935364 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.608 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:00.511 
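The `killprocess`/`wait` sequence above tears down the target: confirm the pid is alive, signal it, then reap it so the exit status is collected. A hedged standalone sketch of that shape (a background `sleep` stands in for `nvmf_tgt`; names are illustrative, not the autotest helpers verbatim):

```shell
#!/usr/bin/env bash
# Sketch of a killprocess-style teardown: probe with kill -0, send SIGTERM,
# then wait to reap the child. Hypothetical helper, not SPDK's exact code.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone: nothing to do
    echo "killing process with pid $pid"
    kill "$pid"                              # default SIGTERM
    wait "$pid" 2>/dev/null || true          # reap; ignore the signal status
    return 0
}

sleep 30 &                                   # stand-in for the target process
pid=$!
killprocess "$pid"
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "stopped"
```

`kill -0` sends no signal; it only checks that the pid exists and is signalable, which is why it doubles as the liveness probe before and after the kill.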
00:17:00.511 real 0m7.739s 00:17:00.511 user 0m12.371s 00:17:00.511 sys 0m2.271s 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:00.511 ************************************ 00:17:00.511 END TEST nvmf_multitarget 00:17:00.511 ************************************ 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.511 ************************************ 00:17:00.511 START TEST nvmf_rpc 00:17:00.511 ************************************ 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:00.511 * Looking for test storage... 
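The `cmp_versions` trace that follows compares dotted version strings field by field (here, deciding whether `lcov` 1.15 is older than 2, with missing fields treated as 0). A self-contained sketch of the same idea — an illustration, not SPDK's actual `scripts/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of dotted-version less-than: split both strings on '.', then
# compare numerically field by field, padding the shorter with zeros.
version_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # absent fields compare as 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1                              # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

The traced script additionally dispatches on the operator (`<`, `>`, `==`) via a `case "$op"` block, but the per-field numeric walk is the core of it.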
00:17:00.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.511 07:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:00.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.511 --rc genhtml_branch_coverage=1 00:17:00.511 --rc genhtml_function_coverage=1 00:17:00.511 --rc genhtml_legend=1 00:17:00.511 --rc geninfo_all_blocks=1 00:17:00.511 --rc geninfo_unexecuted_blocks=1 
00:17:00.511 00:17:00.511 ' 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:00.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.511 --rc genhtml_branch_coverage=1 00:17:00.511 --rc genhtml_function_coverage=1 00:17:00.511 --rc genhtml_legend=1 00:17:00.511 --rc geninfo_all_blocks=1 00:17:00.511 --rc geninfo_unexecuted_blocks=1 00:17:00.511 00:17:00.511 ' 00:17:00.511 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:00.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.511 --rc genhtml_branch_coverage=1 00:17:00.511 --rc genhtml_function_coverage=1 00:17:00.511 --rc genhtml_legend=1 00:17:00.511 --rc geninfo_all_blocks=1 00:17:00.511 --rc geninfo_unexecuted_blocks=1 00:17:00.511 00:17:00.512 ' 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:00.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.512 --rc genhtml_branch_coverage=1 00:17:00.512 --rc genhtml_function_coverage=1 00:17:00.512 --rc genhtml_legend=1 00:17:00.512 --rc geninfo_all_blocks=1 00:17:00.512 --rc geninfo_unexecuted_blocks=1 00:17:00.512 00:17:00.512 ' 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.512 07:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:00.512 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:00.512 07:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.514 
07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.514 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:17:02.515 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.515 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
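The discovery loop traced here maps each detected PCI address to its network interface names by globbing the `net/` subdirectory under sysfs (`/sys/bus/pci/devices/$pci/net/*`), then stripping the path prefix. To stay runnable without the hardware, this sketch builds a throwaway mock of that sysfs layout; the glob-and-strip logic is the same:

```shell
#!/usr/bin/env bash
# Sketch of the sysfs PCI -> netdev mapping: one directory per interface
# under <pci>/net/, so globbing and stripping the path yields the names.
# Uses a mock tree; the real script globs /sys/bus/pci/devices instead.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in "$sysfs"/*; do
    pci_net_devs=("$pci"/net/*)              # one glob match per interface dir
    pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the path, keep iface name
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

The `${var##*/}` expansion is the same trick the trace shows at `common.sh@427`: it deletes the longest leading match of `*/`, leaving only the final path component.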
00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.515 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.515 07:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:02.515 
07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.515 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.774 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.774 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.774 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:02.774 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.774 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.774 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.774 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.774 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:17:02.774 00:17:02.774 --- 10.0.0.2 ping statistics --- 00:17:02.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.774 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:17:02.774 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:17:02.774 00:17:02.775 --- 10.0.0.1 ping statistics --- 00:17:02.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.775 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2937723 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2937723 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2937723 
']' 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.775 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.775 [2024-11-19 07:40:54.664355] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:02.775 [2024-11-19 07:40:54.664513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.033 [2024-11-19 07:40:54.819822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.033 [2024-11-19 07:40:54.964552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.033 [2024-11-19 07:40:54.964649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.033 [2024-11-19 07:40:54.964681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.033 [2024-11-19 07:40:54.964730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:03.033 [2024-11-19 07:40:54.964752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.291 [2024-11-19 07:40:54.967637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.291 [2024-11-19 07:40:54.967729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.291 [2024-11-19 07:40:54.967769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.291 [2024-11-19 07:40:54.967774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.857 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:03.857 "tick_rate": 2700000000, 00:17:03.857 "poll_groups": [ 00:17:03.857 { 00:17:03.857 "name": "nvmf_tgt_poll_group_000", 00:17:03.857 "admin_qpairs": 0, 00:17:03.857 "io_qpairs": 0, 00:17:03.857 
"current_admin_qpairs": 0, 00:17:03.857 "current_io_qpairs": 0, 00:17:03.857 "pending_bdev_io": 0, 00:17:03.857 "completed_nvme_io": 0, 00:17:03.857 "transports": [] 00:17:03.857 }, 00:17:03.857 { 00:17:03.857 "name": "nvmf_tgt_poll_group_001", 00:17:03.857 "admin_qpairs": 0, 00:17:03.857 "io_qpairs": 0, 00:17:03.857 "current_admin_qpairs": 0, 00:17:03.857 "current_io_qpairs": 0, 00:17:03.857 "pending_bdev_io": 0, 00:17:03.857 "completed_nvme_io": 0, 00:17:03.857 "transports": [] 00:17:03.857 }, 00:17:03.857 { 00:17:03.857 "name": "nvmf_tgt_poll_group_002", 00:17:03.857 "admin_qpairs": 0, 00:17:03.857 "io_qpairs": 0, 00:17:03.857 "current_admin_qpairs": 0, 00:17:03.857 "current_io_qpairs": 0, 00:17:03.857 "pending_bdev_io": 0, 00:17:03.857 "completed_nvme_io": 0, 00:17:03.857 "transports": [] 00:17:03.857 }, 00:17:03.857 { 00:17:03.857 "name": "nvmf_tgt_poll_group_003", 00:17:03.857 "admin_qpairs": 0, 00:17:03.857 "io_qpairs": 0, 00:17:03.857 "current_admin_qpairs": 0, 00:17:03.857 "current_io_qpairs": 0, 00:17:03.857 "pending_bdev_io": 0, 00:17:03.857 "completed_nvme_io": 0, 00:17:03.857 "transports": [] 00:17:03.857 } 00:17:03.857 ] 00:17:03.857 }' 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.858 [2024-11-19 07:40:55.749451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:03.858 "tick_rate": 2700000000, 00:17:03.858 "poll_groups": [ 00:17:03.858 { 00:17:03.858 "name": "nvmf_tgt_poll_group_000", 00:17:03.858 "admin_qpairs": 0, 00:17:03.858 "io_qpairs": 0, 00:17:03.858 "current_admin_qpairs": 0, 00:17:03.858 "current_io_qpairs": 0, 00:17:03.858 "pending_bdev_io": 0, 00:17:03.858 "completed_nvme_io": 0, 00:17:03.858 "transports": [ 00:17:03.858 { 00:17:03.858 "trtype": "TCP" 00:17:03.858 } 00:17:03.858 ] 00:17:03.858 }, 00:17:03.858 { 00:17:03.858 "name": "nvmf_tgt_poll_group_001", 00:17:03.858 "admin_qpairs": 0, 00:17:03.858 "io_qpairs": 0, 00:17:03.858 "current_admin_qpairs": 0, 00:17:03.858 "current_io_qpairs": 0, 00:17:03.858 "pending_bdev_io": 0, 00:17:03.858 "completed_nvme_io": 0, 00:17:03.858 "transports": [ 00:17:03.858 { 00:17:03.858 "trtype": "TCP" 00:17:03.858 } 00:17:03.858 ] 00:17:03.858 }, 00:17:03.858 { 00:17:03.858 "name": "nvmf_tgt_poll_group_002", 00:17:03.858 "admin_qpairs": 0, 00:17:03.858 "io_qpairs": 0, 00:17:03.858 
"current_admin_qpairs": 0, 00:17:03.858 "current_io_qpairs": 0, 00:17:03.858 "pending_bdev_io": 0, 00:17:03.858 "completed_nvme_io": 0, 00:17:03.858 "transports": [ 00:17:03.858 { 00:17:03.858 "trtype": "TCP" 00:17:03.858 } 00:17:03.858 ] 00:17:03.858 }, 00:17:03.858 { 00:17:03.858 "name": "nvmf_tgt_poll_group_003", 00:17:03.858 "admin_qpairs": 0, 00:17:03.858 "io_qpairs": 0, 00:17:03.858 "current_admin_qpairs": 0, 00:17:03.858 "current_io_qpairs": 0, 00:17:03.858 "pending_bdev_io": 0, 00:17:03.858 "completed_nvme_io": 0, 00:17:03.858 "transports": [ 00:17:03.858 { 00:17:03.858 "trtype": "TCP" 00:17:03.858 } 00:17:03.858 ] 00:17:03.858 } 00:17:03.858 ] 00:17:03.858 }' 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:03.858 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.116 Malloc1 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.116 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.117 [2024-11-19 07:40:55.971464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.117 
07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:04.117 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:04.117 [2024-11-19 07:40:55.994781] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:04.117 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:04.117 could not add new controller: failed to write to nvme-fabrics device 00:17:04.117 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:04.117 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.117 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.117 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.117 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.117 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.117 07:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.117 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.117 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:05.053 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:05.053 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:05.053 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.053 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:05.053 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:06.961 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:06.961 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:06.961 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:06.961 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:06.961 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:06.961 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:06.961 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:07.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.222 07:40:58 
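The `waitforserial` / `waitforserial_disconnect` polling visible in the trace above (repeated `lsblk -l -o NAME,SERIAL | grep ... SPDKISFASTANDAWESOME` with a bounded retry loop) can be sketched as below. This is a simplified reconstruction of the helpers in common/autotest_common.sh, not the exact implementation; `list_block_devs` is a stand-in for `lsblk -l -o NAME,SERIAL` so the loop logic can be exercised without NVMe hardware.

```shell
# Simplified sketch of the serial-polling helpers from common/autotest_common.sh.
# list_block_devs is an assumption: a seam standing in for `lsblk -l -o NAME,SERIAL`
# so the retry logic is testable on a machine with no NVMe fabrics devices.
list_block_devs() { lsblk -l -o NAME,SERIAL 2>/dev/null; }

# Poll (up to ~16 tries, 2 s apart) until a block device with the given
# serial shows up, mirroring the `(( i++ <= 15 ))` loop in the log.
waitforserial() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    if (( $(list_block_devs | grep -c "$serial") >= 1 )); then
      return 0
    fi
    sleep 2
  done
  return 1
}

# Inverse: wait until no device with that serial remains after disconnect.
waitforserial_disconnect() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    if ! list_block_devs | grep -q -w "$serial"; then
      return 0
    fi
    sleep 2
  done
  return 1
}
```

In the log both helpers return 0 on the first check, since the `nvme connect` / `nvme disconnect` immediately before them has already taken effect.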
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:07.222 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:07.222 [2024-11-19 07:40:58.997669] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:07.222 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:07.222 could not add new controller: failed to write to nvme-fabrics device 00:17:07.222 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:07.222 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.222 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.222 07:40:59 
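The host-ACL sequence this part of the run drives (connect rejected while the host NQN is not on the allow list, accepted after `nvmf_subsystem_add_host`, rejected again after `nvmf_subsystem_remove_host`, and accepted once `nvmf_subsystem_allow_any_host -e` is set) condenses to the sketch below. The commands mirror the `rpc_cmd` and `nvme connect` invocations in the log; they are only recorded here, not executed. `rpc.py` as the RPC entry point and the placeholder initiator NQN are assumptions — on the test host, `rpc_cmd` dispatches to the SPDK RPC interface of the running `nvmf_tgt`.

```shell
# Condensed sketch of the allow-list flow from target/rpc.sh.
# Commands are recorded into PLAN rather than executed; rpc.py and the
# placeholder HOSTNQN are assumptions for illustration.
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:<initiator-uuid>"  # placeholder
SUBNQN="nqn.2016-06.io.spdk:cnode1"
PLAN=()
rpc()     { PLAN+=("rpc.py $*"); }
connect() { PLAN+=("nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $SUBNQN"); }

connect                                          # rejected: host not allowed
rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
connect                                          # accepted
rpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
connect                                          # rejected again
rpc nvmf_subsystem_allow_any_host -e "$SUBNQN"
connect                                          # accepted: any host allowed
printf '%s\n' "${PLAN[@]}"
```

The two "could not add new controller: failed to write to nvme-fabrics device" messages in the log correspond to the rejected connects: the target logs `nvmf_qpair_access_allowed: *ERROR*: Subsystem ... does not allow host ...` and the initiator's write to /dev/nvme-fabrics fails with an I/O error.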
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.222 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:07.222 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.222 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.222 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.222 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:08.156 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:08.156 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:08.156 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:08.156 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:08.156 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:10.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.065 [2024-11-19 07:41:01.894483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.065 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.000 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:11.000 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:11.000 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.000 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:11.000 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.909 07:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.909 [2024-11-19 07:41:04.817775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.909 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.910 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:12.910 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.910 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.910 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.910 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.846 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.846 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:13.846 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.846 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:13.846 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.747 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.005 [2024-11-19 07:41:07.687562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.005 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:16.575 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:16.575 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:16.575 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:16.575 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:16.575 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:18.480 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:18.480 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:18.480 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.480 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:18.480 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.480 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:18.480 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.738 [2024-11-19 07:41:10.607955] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.738 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:18.739 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.739 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.739 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.739 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.739 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.739 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.739 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.739 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:19.680 07:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:19.680 07:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:19.680 07:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:19.680 07:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:19.680 07:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.588 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.846 [2024-11-19 07:41:13.532897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.846 07:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.846 07:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:22.416 07:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:22.416 07:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:22.416 07:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.416 07:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:22.417 07:41:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:24.321 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:24.321 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:17:24.321 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.322 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:24.322 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.322 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:24.322 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 [2024-11-19 07:41:16.391247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 [2024-11-19 07:41:16.439290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.582 
07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.582 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.583 [2024-11-19 07:41:16.487470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.583 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:24.842 
07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 [2024-11-19 07:41:16.535608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.842 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.843 [2024-11-19 
07:41:16.583814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.843 
07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:24.843 "tick_rate": 2700000000, 00:17:24.843 "poll_groups": [ 00:17:24.843 { 00:17:24.843 "name": "nvmf_tgt_poll_group_000", 00:17:24.843 "admin_qpairs": 2, 00:17:24.843 "io_qpairs": 84, 00:17:24.843 "current_admin_qpairs": 0, 00:17:24.843 "current_io_qpairs": 0, 00:17:24.843 "pending_bdev_io": 0, 00:17:24.843 "completed_nvme_io": 133, 00:17:24.843 "transports": [ 00:17:24.843 { 00:17:24.843 "trtype": "TCP" 00:17:24.843 } 00:17:24.843 ] 00:17:24.843 }, 00:17:24.843 { 00:17:24.843 "name": "nvmf_tgt_poll_group_001", 00:17:24.843 "admin_qpairs": 2, 00:17:24.843 "io_qpairs": 84, 00:17:24.843 "current_admin_qpairs": 0, 00:17:24.843 "current_io_qpairs": 0, 00:17:24.843 "pending_bdev_io": 0, 00:17:24.843 "completed_nvme_io": 192, 00:17:24.843 "transports": [ 00:17:24.843 { 00:17:24.843 "trtype": "TCP" 00:17:24.843 } 00:17:24.843 ] 00:17:24.843 }, 00:17:24.843 { 00:17:24.843 "name": "nvmf_tgt_poll_group_002", 00:17:24.843 "admin_qpairs": 1, 00:17:24.843 "io_qpairs": 84, 00:17:24.843 "current_admin_qpairs": 0, 00:17:24.843 "current_io_qpairs": 0, 00:17:24.843 "pending_bdev_io": 0, 00:17:24.843 "completed_nvme_io": 177, 00:17:24.843 "transports": [ 00:17:24.843 { 00:17:24.843 "trtype": "TCP" 00:17:24.843 } 00:17:24.843 ] 00:17:24.843 }, 00:17:24.843 { 00:17:24.843 "name": "nvmf_tgt_poll_group_003", 00:17:24.843 "admin_qpairs": 2, 00:17:24.843 "io_qpairs": 84, 
00:17:24.843 "current_admin_qpairs": 0, 00:17:24.843 "current_io_qpairs": 0, 00:17:24.843 "pending_bdev_io": 0, 00:17:24.843 "completed_nvme_io": 184, 00:17:24.843 "transports": [ 00:17:24.843 { 00:17:24.843 "trtype": "TCP" 00:17:24.843 } 00:17:24.843 ] 00:17:24.843 } 00:17:24.843 ] 00:17:24.843 }' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:24.843 rmmod nvme_tcp 00:17:24.843 rmmod nvme_fabrics 00:17:24.843 rmmod nvme_keyring 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2937723 ']' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2937723 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2937723 ']' 00:17:24.843 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2937723 00:17:25.101 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:25.101 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.101 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2937723 00:17:25.101 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.101 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.101 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2937723' 00:17:25.101 killing process with pid 2937723 00:17:25.101 07:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2937723 00:17:25.101 07:41:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2937723 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.485 07:41:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:28.400 00:17:28.400 real 0m27.912s 00:17:28.400 user 1m30.229s 00:17:28.400 sys 0m4.646s 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.400 ************************************ 00:17:28.400 END TEST 
nvmf_rpc 00:17:28.400 ************************************ 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.400 ************************************ 00:17:28.400 START TEST nvmf_invalid 00:17:28.400 ************************************ 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:28.400 * Looking for test storage... 00:17:28.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:28.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.400 --rc genhtml_branch_coverage=1 00:17:28.400 --rc genhtml_function_coverage=1 00:17:28.400 --rc genhtml_legend=1 00:17:28.400 --rc geninfo_all_blocks=1 00:17:28.400 --rc geninfo_unexecuted_blocks=1 00:17:28.400 00:17:28.400 ' 
00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:28.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.400 --rc genhtml_branch_coverage=1 00:17:28.400 --rc genhtml_function_coverage=1 00:17:28.400 --rc genhtml_legend=1 00:17:28.400 --rc geninfo_all_blocks=1 00:17:28.400 --rc geninfo_unexecuted_blocks=1 00:17:28.400 00:17:28.400 ' 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:28.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.400 --rc genhtml_branch_coverage=1 00:17:28.400 --rc genhtml_function_coverage=1 00:17:28.400 --rc genhtml_legend=1 00:17:28.400 --rc geninfo_all_blocks=1 00:17:28.400 --rc geninfo_unexecuted_blocks=1 00:17:28.400 00:17:28.400 ' 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:28.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.400 --rc genhtml_branch_coverage=1 00:17:28.400 --rc genhtml_function_coverage=1 00:17:28.400 --rc genhtml_legend=1 00:17:28.400 --rc geninfo_all_blocks=1 00:17:28.400 --rc geninfo_unexecuted_blocks=1 00:17:28.400 00:17:28.400 ' 00:17:28.400 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.661 07:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.661 
07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.661 07:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:28.661 07:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:28.661 07:41:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:30.569 07:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.569 07:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:30.569 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:30.569 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:30.569 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:30.569 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.569 07:41:22 
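The discovery loop traced above globs `/sys/bus/pci/devices/$pci/net/*` and strips the path prefix (`"${pci_net_devs[@]##*/}"`) to map each PCI address to its net interface names, yielding `cvl_0_0` under `0000:0a:00.0` and `cvl_0_1` under `0000:0a:00.1`. A minimal Python sketch of the same lookup, exercised against a temporary fake sysfs tree (the real `/sys` layout is host-specific, so this is illustrative only):

```python
import os
import tempfile
from glob import glob

def net_devs_for_pci(sysfs_root, pci_addr):
    """Return net interface names under a PCI device, mirroring
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) plus the
    "${pci_net_devs[@]##*/}" basename strip from the trace."""
    pattern = os.path.join(sysfs_root, "bus/pci/devices", pci_addr, "net", "*")
    return sorted(os.path.basename(p) for p in glob(pattern))

# Build a fake sysfs tree mirroring the devices found in the log.
root = tempfile.mkdtemp()
for pci, ifname in [("0000:0a:00.0", "cvl_0_0"), ("0000:0a:00.1", "cvl_0_1")]:
    os.makedirs(os.path.join(root, "bus/pci/devices", pci, "net", ifname))

print(net_devs_for_pci(root, "0000:0a:00.0"))  # ['cvl_0_0']
```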
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:30.569 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.570 07:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:30.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:17:30.570 00:17:30.570 --- 10.0.0.2 ping statistics --- 00:17:30.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.570 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:30.570 00:17:30.570 --- 10.0.0.1 ping statistics --- 00:17:30.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.570 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:30.570 07:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2942490 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2942490 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2942490 ']' 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.570 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:30.828 [2024-11-19 07:41:22.580700] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:30.828 [2024-11-19 07:41:22.580864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.828 [2024-11-19 07:41:22.737273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.088 [2024-11-19 07:41:22.883936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.088 [2024-11-19 07:41:22.884030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.088 [2024-11-19 07:41:22.884057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.088 [2024-11-19 07:41:22.884083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.088 [2024-11-19 07:41:22.884104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
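`waitforlisten` above blocks until the freshly launched `nvmf_tgt` exposes its JSON-RPC socket at `/var/tmp/spdk.sock` (the trace shows `max_retries=100` and the "Waiting for process..." banner). A rough, simplified sketch of that wait loop, polling for the socket path with a retry budget; the real helper also verifies the PID is still alive, which is omitted here:

```python
import os
import socket
import tempfile
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.1):
    """Poll until a Unix domain socket file appears at sock_path.
    Returns True once found, False after max_retries attempts."""
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            return True
        time.sleep(delay)
    return False

# Demo: create a listening Unix socket, then "wait" for it.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "spdk.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)  # bind() creates the socket file on disk
srv.listen(1)
print(wait_for_listen(path, max_retries=5))  # True
srv.close()
```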
00:17:31.088 [2024-11-19 07:41:22.887206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.088 [2024-11-19 07:41:22.887270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.088 [2024-11-19 07:41:22.887326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.088 [2024-11-19 07:41:22.887333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.654 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.654 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:31.654 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:31.654 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.655 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:31.655 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.655 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:31.655 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18006 00:17:31.913 [2024-11-19 07:41:23.822568] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:31.913 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:31.913 { 00:17:31.913 "nqn": "nqn.2016-06.io.spdk:cnode18006", 00:17:31.913 "tgt_name": "foobar", 00:17:31.913 "method": "nvmf_create_subsystem", 00:17:31.913 "req_id": 1 00:17:31.913 } 00:17:31.913 Got JSON-RPC error 
response 00:17:31.913 response: 00:17:31.913 { 00:17:31.913 "code": -32603, 00:17:31.913 "message": "Unable to find target foobar" 00:17:31.913 }' 00:17:31.913 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:31.913 { 00:17:31.913 "nqn": "nqn.2016-06.io.spdk:cnode18006", 00:17:31.913 "tgt_name": "foobar", 00:17:31.913 "method": "nvmf_create_subsystem", 00:17:31.913 "req_id": 1 00:17:31.913 } 00:17:31.913 Got JSON-RPC error response 00:17:31.913 response: 00:17:31.913 { 00:17:31.913 "code": -32603, 00:17:31.913 "message": "Unable to find target foobar" 00:17:31.913 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:31.913 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:32.173 07:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20895 00:17:32.173 [2024-11-19 07:41:24.087496] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20895: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:32.432 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:32.432 { 00:17:32.432 "nqn": "nqn.2016-06.io.spdk:cnode20895", 00:17:32.432 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:32.432 "method": "nvmf_create_subsystem", 00:17:32.432 "req_id": 1 00:17:32.432 } 00:17:32.432 Got JSON-RPC error response 00:17:32.432 response: 00:17:32.432 { 00:17:32.432 "code": -32602, 00:17:32.432 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:32.432 }' 00:17:32.432 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:32.432 { 00:17:32.432 "nqn": "nqn.2016-06.io.spdk:cnode20895", 00:17:32.432 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:32.432 "method": "nvmf_create_subsystem", 
00:17:32.432 "req_id": 1 00:17:32.432 } 00:17:32.432 Got JSON-RPC error response 00:17:32.432 response: 00:17:32.432 { 00:17:32.432 "code": -32602, 00:17:32.432 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:32.432 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:32.432 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:32.432 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16621 00:17:32.432 [2024-11-19 07:41:24.352416] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16621: invalid model number 'SPDK_Controller' 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:32.691 { 00:17:32.691 "nqn": "nqn.2016-06.io.spdk:cnode16621", 00:17:32.691 "model_number": "SPDK_Controller\u001f", 00:17:32.691 "method": "nvmf_create_subsystem", 00:17:32.691 "req_id": 1 00:17:32.691 } 00:17:32.691 Got JSON-RPC error response 00:17:32.691 response: 00:17:32.691 { 00:17:32.691 "code": -32602, 00:17:32.691 "message": "Invalid MN SPDK_Controller\u001f" 00:17:32.691 }' 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:32.691 { 00:17:32.691 "nqn": "nqn.2016-06.io.spdk:cnode16621", 00:17:32.691 "model_number": "SPDK_Controller\u001f", 00:17:32.691 "method": "nvmf_create_subsystem", 00:17:32.691 "req_id": 1 00:17:32.691 } 00:17:32.691 Got JSON-RPC error response 00:17:32.691 response: 00:17:32.691 { 00:17:32.691 "code": -32602, 00:17:32.691 "message": "Invalid MN SPDK_Controller\u001f" 00:17:32.691 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.691 07:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.691 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:32.692 07:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:32.692 07:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:32.692 07:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.692 07:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '9@Yf:}ri&20T2tt3L>Lk]' 00:17:32.692 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '9@Yf:}ri&20T2tt3L>Lk]' nqn.2016-06.io.spdk:cnode22723 00:17:32.952 [2024-11-19 07:41:24.697552] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22723: invalid serial number '9@Yf:}ri&20T2tt3L>Lk]' 00:17:32.952 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:32.952 { 00:17:32.952 "nqn": "nqn.2016-06.io.spdk:cnode22723", 00:17:32.952 "serial_number": "9@Yf:}ri&20T2tt3L>Lk]", 00:17:32.952 "method": "nvmf_create_subsystem", 00:17:32.952 "req_id": 1 00:17:32.952 } 00:17:32.952 Got JSON-RPC error response 00:17:32.952 response: 00:17:32.952 { 00:17:32.952 "code": -32602, 00:17:32.952 "message": "Invalid SN 9@Yf:}ri&20T2tt3L>Lk]" 00:17:32.952 }' 00:17:32.952 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:32.952 { 00:17:32.952 "nqn": "nqn.2016-06.io.spdk:cnode22723", 00:17:32.952 "serial_number": "9@Yf:}ri&20T2tt3L>Lk]", 00:17:32.952 "method": "nvmf_create_subsystem", 00:17:32.952 "req_id": 1 00:17:32.952 } 00:17:32.952 Got JSON-RPC error response 00:17:32.952 response: 00:17:32.952 { 00:17:32.952 "code": -32602, 00:17:32.952 "message": "Invalid SN 9@Yf:}ri&20T2tt3L>Lk]" 00:17:32.952 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:32.952 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:32.952 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:32.952 07:41:24 
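`gen_random_s` in the trace builds its fuzz string one character at a time: pick a code point from the `chars` table (ASCII 32 through 127), render it with `printf %x` plus `echo -e`, and append to `string`, producing inputs like `9@Yf:}ri&20T2tt3L>Lk]`. A compact Python sketch of the same generator (equivalent distribution, not the harness's actual helper):

```python
import random

# Mirrors the chars=('32' ... '127') table expanded in the trace.
CHARS = [chr(c) for c in range(32, 128)]

def gen_random_s(length):
    """Random string drawn from ASCII 32..127, built character by
    character like the shell helper's string+= loop."""
    return "".join(random.choice(CHARS) for _ in range(length))

s = gen_random_s(21)
print(len(s), repr(s))
```

The harness then feeds such strings to `nvmf_create_subsystem -s`/`-d` to provoke the "Invalid SN" and "Invalid MN" JSON-RPC errors checked above.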
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:32.952 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:32.952 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:32.952 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:32.952 07:41:24
[gen_random_s trace condensed: 41 loop iterations, each printing the next code point with printf %x, converting it with echo -e, and appending the character to $string; the resulting 41-character model number is echoed below]
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'R%1VOH%":kuu!|!K@::1@Zit_iBqI+[[dk&`}5^#C' 00:17:32.955 07:41:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'R%1VOH%":kuu!|!K@::1@Zit_iBqI+[[dk&`}5^#C' nqn.2016-06.io.spdk:cnode26233 00:17:33.213 [2024-11-19 07:41:25.102916] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26233: invalid model number 'R%1VOH%":kuu!|!K@::1@Zit_iBqI+[[dk&`}5^#C' 00:17:33.213 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:33.213 { 00:17:33.213 "nqn": "nqn.2016-06.io.spdk:cnode26233", 00:17:33.213 "model_number": "R%1VOH%\":kuu!|!K@::1@Zit_iBqI+[[dk&`}5^#C", 00:17:33.213 "method": "nvmf_create_subsystem", 00:17:33.213 "req_id": 1 00:17:33.213 } 00:17:33.213 Got JSON-RPC error response 00:17:33.213 response: 00:17:33.213 { 00:17:33.213 "code": -32602, 00:17:33.213 "message": "Invalid MN R%1VOH%\":kuu!|!K@::1@Zit_iBqI+[[dk&`}5^#C" 00:17:33.213 }' 00:17:33.213 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:33.213 { 00:17:33.213 "nqn": "nqn.2016-06.io.spdk:cnode26233", 00:17:33.213 "model_number": "R%1VOH%\":kuu!|!K@::1@Zit_iBqI+[[dk&`}5^#C", 00:17:33.213 "method": "nvmf_create_subsystem", 00:17:33.213 "req_id": 1 00:17:33.213 } 00:17:33.213 Got JSON-RPC error response 00:17:33.213 response: 00:17:33.213 { 00:17:33.213 "code": -32602, 00:17:33.213 "message": "Invalid MN R%1VOH%\":kuu!|!K@::1@Zit_iBqI+[[dk&`}5^#C" 00:17:33.213 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:33.213 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:33.473 [2024-11-19 07:41:25.372034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.733 
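The character-by-character traces above come from invalid.sh's gen_random_s helper. As a rough Python sketch — the loop structure and the 32..127 code-point pool are inferred from the trace, not taken from the actual script source:

```python
import random

def gen_random_s(length):
    # Approximation of the gen_random_s loop traced above: build the pool
    # of characters for code points 32..127 (mirroring the chars array in
    # the log), then append one randomly chosen character per iteration
    # until `length` characters have been accumulated.
    chars = [chr(c) for c in range(32, 128)]
    string = ''
    for _ in range(length):
        string += random.choice(chars)
    return string
```

Lengths 21 and 41 are chosen to exceed the 20-byte serial-number and 40-byte model-number fields by exactly one character, so each RPC call is guaranteed to be rejected.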
07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:33.993 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:33.993 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:33.993 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:33.993 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:33.993 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:33.993 [2024-11-19 07:41:25.924377] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:34.251 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:34.251 { 00:17:34.251 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:34.251 "listen_address": { 00:17:34.251 "trtype": "tcp", 00:17:34.251 "traddr": "", 00:17:34.251 "trsvcid": "4421" 00:17:34.251 }, 00:17:34.251 "method": "nvmf_subsystem_remove_listener", 00:17:34.251 "req_id": 1 00:17:34.251 } 00:17:34.251 Got JSON-RPC error response 00:17:34.251 response: 00:17:34.251 { 00:17:34.251 "code": -32602, 00:17:34.251 "message": "Invalid parameters" 00:17:34.251 }' 00:17:34.251 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:34.251 { 00:17:34.251 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:34.251 "listen_address": { 00:17:34.251 "trtype": "tcp", 00:17:34.251 "traddr": "", 00:17:34.251 "trsvcid": "4421" 00:17:34.251 }, 00:17:34.251 "method": "nvmf_subsystem_remove_listener", 00:17:34.251 "req_id": 1 00:17:34.251 } 00:17:34.251 Got JSON-RPC error response 00:17:34.251 
response: 00:17:34.251 { 00:17:34.251 "code": -32602, 00:17:34.251 "message": "Invalid parameters" 00:17:34.251 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:34.251 07:41:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18211 -i 0 00:17:34.509 [2024-11-19 07:41:26.209272] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18211: invalid cntlid range [0-65519] 00:17:34.509 07:41:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:34.509 { 00:17:34.509 "nqn": "nqn.2016-06.io.spdk:cnode18211", 00:17:34.509 "min_cntlid": 0, 00:17:34.509 "method": "nvmf_create_subsystem", 00:17:34.509 "req_id": 1 00:17:34.509 } 00:17:34.509 Got JSON-RPC error response 00:17:34.509 response: 00:17:34.509 { 00:17:34.509 "code": -32602, 00:17:34.509 "message": "Invalid cntlid range [0-65519]" 00:17:34.509 }' 00:17:34.509 07:41:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:34.509 { 00:17:34.509 "nqn": "nqn.2016-06.io.spdk:cnode18211", 00:17:34.509 "min_cntlid": 0, 00:17:34.509 "method": "nvmf_create_subsystem", 00:17:34.509 "req_id": 1 00:17:34.509 } 00:17:34.509 Got JSON-RPC error response 00:17:34.509 response: 00:17:34.509 { 00:17:34.509 "code": -32602, 00:17:34.509 "message": "Invalid cntlid range [0-65519]" 00:17:34.509 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:34.509 07:41:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode981 -i 65520 00:17:34.767 [2024-11-19 07:41:26.490200] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode981: invalid cntlid range [65520-65519] 00:17:34.767 07:41:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@75 -- # out='request: 00:17:34.767 { 00:17:34.767 "nqn": "nqn.2016-06.io.spdk:cnode981", 00:17:34.767 "min_cntlid": 65520, 00:17:34.767 "method": "nvmf_create_subsystem", 00:17:34.767 "req_id": 1 00:17:34.767 } 00:17:34.767 Got JSON-RPC error response 00:17:34.767 response: 00:17:34.767 { 00:17:34.767 "code": -32602, 00:17:34.767 "message": "Invalid cntlid range [65520-65519]" 00:17:34.767 }' 00:17:34.767 07:41:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:34.767 { 00:17:34.767 "nqn": "nqn.2016-06.io.spdk:cnode981", 00:17:34.767 "min_cntlid": 65520, 00:17:34.767 "method": "nvmf_create_subsystem", 00:17:34.767 "req_id": 1 00:17:34.767 } 00:17:34.767 Got JSON-RPC error response 00:17:34.767 response: 00:17:34.767 { 00:17:34.767 "code": -32602, 00:17:34.767 "message": "Invalid cntlid range [65520-65519]" 00:17:34.767 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:34.768 07:41:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29885 -I 0 00:17:35.025 [2024-11-19 07:41:26.759235] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29885: invalid cntlid range [1-0] 00:17:35.025 07:41:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:35.025 { 00:17:35.025 "nqn": "nqn.2016-06.io.spdk:cnode29885", 00:17:35.025 "max_cntlid": 0, 00:17:35.025 "method": "nvmf_create_subsystem", 00:17:35.025 "req_id": 1 00:17:35.025 } 00:17:35.025 Got JSON-RPC error response 00:17:35.025 response: 00:17:35.025 { 00:17:35.025 "code": -32602, 00:17:35.025 "message": "Invalid cntlid range [1-0]" 00:17:35.025 }' 00:17:35.025 07:41:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:35.025 { 00:17:35.025 "nqn": "nqn.2016-06.io.spdk:cnode29885", 00:17:35.025 "max_cntlid": 0, 00:17:35.025 
"method": "nvmf_create_subsystem", 00:17:35.025 "req_id": 1 00:17:35.025 } 00:17:35.025 Got JSON-RPC error response 00:17:35.025 response: 00:17:35.025 { 00:17:35.025 "code": -32602, 00:17:35.025 "message": "Invalid cntlid range [1-0]" 00:17:35.025 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:35.025 07:41:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12245 -I 65520 00:17:35.283 [2024-11-19 07:41:27.020145] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12245: invalid cntlid range [1-65520] 00:17:35.283 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:35.283 { 00:17:35.283 "nqn": "nqn.2016-06.io.spdk:cnode12245", 00:17:35.283 "max_cntlid": 65520, 00:17:35.283 "method": "nvmf_create_subsystem", 00:17:35.283 "req_id": 1 00:17:35.283 } 00:17:35.283 Got JSON-RPC error response 00:17:35.283 response: 00:17:35.283 { 00:17:35.283 "code": -32602, 00:17:35.283 "message": "Invalid cntlid range [1-65520]" 00:17:35.283 }' 00:17:35.283 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:35.283 { 00:17:35.283 "nqn": "nqn.2016-06.io.spdk:cnode12245", 00:17:35.283 "max_cntlid": 65520, 00:17:35.283 "method": "nvmf_create_subsystem", 00:17:35.283 "req_id": 1 00:17:35.283 } 00:17:35.283 Got JSON-RPC error response 00:17:35.283 response: 00:17:35.283 { 00:17:35.283 "code": -32602, 00:17:35.283 "message": "Invalid cntlid range [1-65520]" 00:17:35.283 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:35.283 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7926 -i 6 -I 5 00:17:35.541 [2024-11-19 07:41:27.313159] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode7926: invalid cntlid range [6-5] 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:35.541 { 00:17:35.541 "nqn": "nqn.2016-06.io.spdk:cnode7926", 00:17:35.541 "min_cntlid": 6, 00:17:35.541 "max_cntlid": 5, 00:17:35.541 "method": "nvmf_create_subsystem", 00:17:35.541 "req_id": 1 00:17:35.541 } 00:17:35.541 Got JSON-RPC error response 00:17:35.541 response: 00:17:35.541 { 00:17:35.541 "code": -32602, 00:17:35.541 "message": "Invalid cntlid range [6-5]" 00:17:35.541 }' 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:35.541 { 00:17:35.541 "nqn": "nqn.2016-06.io.spdk:cnode7926", 00:17:35.541 "min_cntlid": 6, 00:17:35.541 "max_cntlid": 5, 00:17:35.541 "method": "nvmf_create_subsystem", 00:17:35.541 "req_id": 1 00:17:35.541 } 00:17:35.541 Got JSON-RPC error response 00:17:35.541 response: 00:17:35.541 { 00:17:35.541 "code": -32602, 00:17:35.541 "message": "Invalid cntlid range [6-5]" 00:17:35.541 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:35.541 { 00:17:35.541 "name": "foobar", 00:17:35.541 "method": "nvmf_delete_target", 00:17:35.541 "req_id": 1 00:17:35.541 } 00:17:35.541 Got JSON-RPC error response 00:17:35.541 response: 00:17:35.541 { 00:17:35.541 "code": -32602, 00:17:35.541 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:17:35.541 }' 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:35.541 { 00:17:35.541 "name": "foobar", 00:17:35.541 "method": "nvmf_delete_target", 00:17:35.541 "req_id": 1 00:17:35.541 } 00:17:35.541 Got JSON-RPC error response 00:17:35.541 response: 00:17:35.541 { 00:17:35.541 "code": -32602, 00:17:35.541 "message": "The specified target doesn't exist, cannot delete it." 00:17:35.541 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.541 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.542 rmmod nvme_tcp 00:17:35.806 rmmod nvme_fabrics 00:17:35.806 rmmod nvme_keyring 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2942490 ']' 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@518 -- # killprocess 2942490 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2942490 ']' 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2942490 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2942490 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2942490' 00:17:35.806 killing process with pid 2942490 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2942490 00:17:35.806 07:41:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2942490 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@791 -- # iptables-restore 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.827 07:41:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:39.378 00:17:39.378 real 0m10.491s 00:17:39.378 user 0m26.399s 00:17:39.378 sys 0m2.678s 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:39.378 ************************************ 00:17:39.378 END TEST nvmf_invalid 00:17:39.378 ************************************ 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.378 ************************************ 00:17:39.378 START TEST nvmf_connect_stress 00:17:39.378 ************************************ 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:39.378 * Looking for test storage... 00:17:39.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
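The "Invalid cntlid range" rejections logged above ([65520-65519], [1-0], [6-5]) all follow one rule visible in the error messages: the controller ID range must lie within 1-65519 (0xFFEF) and the minimum must not exceed the maximum. A minimal sketch of that check (illustrative only, not the SPDK `rpc_nvmf_create_subsystem` source):

```shell
# Hedged sketch: reproduce the cntlid-range validation implied by the
# errors above. SPDK rejects anything outside [1, 65519] and any range
# where min > max (65519 = 0xFFEF, per the "[65520-65519]" message).
valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
}

valid_cntlid_range 1 65519     && echo "1-65519: accepted"
valid_cntlid_range 65520 65519 || echo "65520-65519: rejected"   # min above cap
valid_cntlid_range 1 0         || echo "1-0: rejected"           # max below min
valid_cntlid_range 6 5         || echo "6-5: rejected"           # min > max
```

Each rejected pair here corresponds to one of the `-32602` JSON-RPC responses captured in the trace.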
00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:39.378 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:39.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.379 --rc genhtml_branch_coverage=1 00:17:39.379 --rc genhtml_function_coverage=1 00:17:39.379 --rc genhtml_legend=1 00:17:39.379 --rc geninfo_all_blocks=1 00:17:39.379 --rc geninfo_unexecuted_blocks=1 00:17:39.379 00:17:39.379 ' 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:39.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.379 --rc genhtml_branch_coverage=1 00:17:39.379 --rc genhtml_function_coverage=1 00:17:39.379 --rc genhtml_legend=1 00:17:39.379 --rc geninfo_all_blocks=1 00:17:39.379 --rc geninfo_unexecuted_blocks=1 00:17:39.379 00:17:39.379 ' 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:39.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.379 --rc genhtml_branch_coverage=1 00:17:39.379 --rc genhtml_function_coverage=1 00:17:39.379 --rc genhtml_legend=1 00:17:39.379 --rc geninfo_all_blocks=1 00:17:39.379 --rc geninfo_unexecuted_blocks=1 00:17:39.379 00:17:39.379 ' 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:39.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.379 --rc genhtml_branch_coverage=1 00:17:39.379 --rc genhtml_function_coverage=1 00:17:39.379 --rc genhtml_legend=1 00:17:39.379 --rc geninfo_all_blocks=1 00:17:39.379 --rc geninfo_unexecuted_blocks=1 00:17:39.379 00:17:39.379 ' 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.379 07:41:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.379 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
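The `common.sh: line 33: [: : integer expression expected` message just above comes from `'[' '' -eq 1 ']'`: `test`'s `-eq` operator requires integer operands, so an empty variable makes the test malformed (it prints the error and returns status 2, which the `if` treats as false). A small sketch of the failure and the usual defensive fix (not the `common.sh` code itself):

```shell
# Sketch of the logged warning: comparing an empty string numerically.
flag=""

# Malformed test: '[' '' -eq 1 ']' errors out with status 2 (falsy),
# so execution continues down the false branch, as seen in the log.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "would not print"
fi

# Supplying a numeric default avoids the malformed comparison entirely:
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"   # prints: disabled
fi
```

The warning is therefore harmless here (the false branch is taken either way), but the `${var:-0}` form silences it.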
00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:39.380 07:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.287 07:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:41.287 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.287 07:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:41.287 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.287 07:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:41.287 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:41.287 Found net devices under 0000:0a:00.1: cvl_0_1 
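The "Found net devices under 0000:0a:00.x: cvl_0_x" lines above come from globbing each whitelisted PCI function's sysfs `net/` directory to find the kernel interface bound to it. A hedged sketch of that lookup, with the sysfs root as a parameter so the logic can be demonstrated against a mock tree instead of real hardware (the device and interface names in the demo are hypothetical):

```shell
# Sketch of the discovery step: list net interfaces under a PCI device's
# sysfs node, e.g. /sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0.
net_devs_for_pci() {
    local sysfs=$1 pci=$2 d
    for d in "$sysfs/devices/$pci/net/"*; do
        [ -e "$d" ] && basename "$d"   # interface name is the entry's basename
    done
}

# Demo against a mock sysfs tree (hypothetical names, no root needed):
mock=$(mktemp -d)
mkdir -p "$mock/devices/0000:0a:00.0/net/cvl_0_0"
net_devs_for_pci "$mock" 0000:0a:00.0   # → cvl_0_0
rm -rf "$mock"
```

On a real system the first argument would be `/sys/bus/pci`, matching the `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` expansion visible in the trace.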
00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.287 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.288 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.288 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:41.288 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:41.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:41.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:17:41.288 00:17:41.288 --- 10.0.0.2 ping statistics --- 00:17:41.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.288 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:17:41.288 00:17:41.288 --- 10.0.0.1 ping statistics --- 00:17:41.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.288 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:41.288 07:41:33 
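The `nvmf_tcp_init` sequence traced above follows a fixed recipe: flush both ports of the NIC pair, move the target-side port into a fresh network namespace, assign 10.0.0.1/10.0.0.2, bring the links (and namespace loopback) up, open TCP port 4420 in iptables, and verify reachability with one ping in each direction. A dry-run sketch of that recipe (interface and namespace names taken from this log; the `run` wrapper only prints by default, since the real commands need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init namespace wiring seen in the log.
# Set DO_IT=1 and run as root to actually execute; by default it only prints.
run() { if [ "${DO_IT:-0}" = 1 ]; then "$@"; else echo "$*"; fi; }

NS=cvl_0_0_ns_spdk            # target-side network namespace (from the log)
TGT_IF=cvl_0_0  INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"              # target port lives in the netns
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                            # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INI_IP"        # target -> initiator
```

Keeping the target port inside its own namespace is what lets a single-host CI box exercise a real TCP path: the kernel cannot short-circuit the two addresses onto loopback, so traffic actually traverses the NIC pair.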
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2945387 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2945387 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2945387 ']' 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.288 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.288 [2024-11-19 07:41:33.135906] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:41.288 [2024-11-19 07:41:33.136077] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.546 [2024-11-19 07:41:33.279386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:41.546 [2024-11-19 07:41:33.394280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.546 [2024-11-19 07:41:33.394367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.546 [2024-11-19 07:41:33.394387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.546 [2024-11-19 07:41:33.394407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.546 [2024-11-19 07:41:33.394423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:41.546 [2024-11-19 07:41:33.396888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.546 [2024-11-19 07:41:33.396933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.546 [2024-11-19 07:41:33.396939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.482 [2024-11-19 07:41:34.145813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.482 [2024-11-19 07:41:34.166132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.482 NULL1 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2945537 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:42.482 07:41:34 
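The four `rpc_cmd` calls above provision the target before the stress run: create the TCP transport, create the subsystem, attach a listener on 10.0.0.2:4420, and back it with a null bdev. A hedged recap of that sequence (the RPC names and arguments are copied from this log; the `rpc` wrapper below only echoes the invocation, since the real calls need a live `nvmf_tgt` RPC socket and SPDK's `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# Dry-run recap of the target-side RPC setup from the log.
# 'rpc' stands in for SPDK's scripts/rpc.py and only prints here.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512 B blocks
```

A null bdev discards writes and returns zeroes on reads, which is the usual choice for a connect/disconnect stress test: it removes storage latency from the picture so the run exercises only the fabric and controller state machine.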
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.482 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.807 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.807 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:42.807 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.807 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.807 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.066 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.066 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:43.066 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.066 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.066 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.325 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:43.325 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.325 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.325 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.896 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.896 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:43.896 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.896 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.896 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.155 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.155 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:44.155 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.155 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.155 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:44.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.673 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.673 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:44.673 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.673 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.673 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.933 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.933 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:44.933 07:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.933 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.933 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.502 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.502 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:45.502 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.502 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.502 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.761 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.761 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:45.761 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.761 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.761 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.019 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.019 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:46.019 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.019 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.019 
07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.277 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.277 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:46.277 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.277 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.277 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.536 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.536 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:46.536 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.536 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.536 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.107 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.107 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:47.107 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.107 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.107 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.365 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.365 
07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:47.365 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.365 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.365 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.623 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.623 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:47.623 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.623 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.623 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.881 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.881 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:47.881 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.881 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.881 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.141 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.141 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:48.141 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:17:48.141 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.141 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.709 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.710 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:48.710 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.710 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.710 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.968 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.968 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:48.968 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.968 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.968 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.225 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.225 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:49.225 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.225 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.225 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:17:49.483 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.483 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:49.483 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.483 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.483 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.052 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.052 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:50.052 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.052 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.052 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.312 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.312 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:50.312 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.312 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.312 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.571 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.571 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2945537 00:17:50.571 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.571 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.571 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.829 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.829 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:50.829 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.829 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.829 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.087 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.087 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:51.087 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.087 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.087 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.656 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.656 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:51.656 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.656 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:51.656 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.916 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.916 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:51.916 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:51.916 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.916 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.175 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.175 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:52.175 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.175 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.175 07:41:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.433 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.433 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:52.433 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.433 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.433 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.691 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2945537 00:17:52.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2945537) - No such process 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2945537 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:52.958 rmmod nvme_tcp 00:17:52.958 rmmod nvme_fabrics 00:17:52.958 rmmod nvme_keyring 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2945387 ']' 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2945387 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2945387 ']' 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2945387 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2945387 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2945387' 00:17:52.958 killing process with pid 2945387 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2945387 00:17:52.958 07:41:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2945387 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.339 07:41:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.246 07:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:56.246 00:17:56.246 real 0m17.149s 00:17:56.246 user 0m43.129s 00:17:56.246 sys 0m5.905s 00:17:56.246 07:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.246 07:41:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.246 ************************************ 00:17:56.246 END TEST nvmf_connect_stress 00:17:56.246 ************************************ 00:17:56.246 07:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:56.246 07:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:56.246 07:41:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.246 07:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:56.246 ************************************ 00:17:56.247 START TEST nvmf_fused_ordering 00:17:56.247 ************************************ 00:17:56.247 07:41:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:56.247 * Looking for test storage... 00:17:56.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.247 07:41:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:56.247 07:41:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:56.247 07:41:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.247 07:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:56.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.247 --rc genhtml_branch_coverage=1 00:17:56.247 --rc genhtml_function_coverage=1 00:17:56.247 --rc genhtml_legend=1 00:17:56.247 --rc geninfo_all_blocks=1 00:17:56.247 --rc geninfo_unexecuted_blocks=1 00:17:56.247 00:17:56.247 ' 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:56.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.247 --rc genhtml_branch_coverage=1 00:17:56.247 --rc genhtml_function_coverage=1 00:17:56.247 --rc genhtml_legend=1 00:17:56.247 --rc geninfo_all_blocks=1 00:17:56.247 --rc geninfo_unexecuted_blocks=1 00:17:56.247 00:17:56.247 ' 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:56.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.247 --rc genhtml_branch_coverage=1 00:17:56.247 --rc genhtml_function_coverage=1 00:17:56.247 --rc genhtml_legend=1 00:17:56.247 --rc geninfo_all_blocks=1 00:17:56.247 --rc geninfo_unexecuted_blocks=1 00:17:56.247 00:17:56.247 ' 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:56.247 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:56.247 --rc genhtml_branch_coverage=1 00:17:56.247 --rc genhtml_function_coverage=1 00:17:56.247 --rc genhtml_legend=1 00:17:56.247 --rc geninfo_all_blocks=1 00:17:56.247 --rc geninfo_unexecuted_blocks=1 00:17:56.247 00:17:56.247 ' 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.247 07:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.247 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:56.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:56.248 07:41:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.781 07:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:58.781 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.781 07:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:58.781 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.781 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.782 07:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:58.782 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:58.782 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:58.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:58.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms
00:17:58.782 
00:17:58.782 --- 10.0.0.2 ping statistics ---
00:17:58.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:58.782 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:58.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:58.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms
00:17:58.782 
00:17:58.782 --- 10.0.0.1 ping statistics ---
00:17:58.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:58.782 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2948817
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2948817
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2948817 ']'
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:58.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:58.782 07:41:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:58.782 [2024-11-19 07:41:50.456933] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:17:58.782 [2024-11-19 07:41:50.457101] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:58.782 [2024-11-19 07:41:50.615993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:59.040 [2024-11-19 07:41:50.756631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:59.040 [2024-11-19 07:41:50.756729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:59.040 [2024-11-19 07:41:50.756758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:59.040 [2024-11-19 07:41:50.756783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:59.040 [2024-11-19 07:41:50.756803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
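The environment bring-up this trace performs (namespace plumbing from nvmf/common.sh@266-@291, then the RPC sequence from target/fused_ordering.sh@15-@20) can be condensed into a standalone dry-run sketch. This is a sketch, not the test script itself: `ip`, `iptables`, and `rpc_cmd` are shadowed with echo shims so the sequence can be inspected without root or a running target, the `scripts/rpc.py` helper path is an assumption, and device names, IPs, and RPC arguments are taken from this log.

```shell
# Dry-run sketch of the bring-up traced above. Privileged tools are shadowed
# with echo shims; remove the shims to execute for real (requires root and a
# running nvmf_tgt). rpc.py location is assumed, not taken from the log.
ip()       { echo "ip $*"; }
iptables() { echo "iptables $*"; }
rpc_cmd()  { echo "scripts/rpc.py $*"; }   # hypothetical helper path

NS="cvl_0_0_ns_spdk"
NQN="nqn.2016-06.io.spdk:cnode1"

# Move the target-side device into its own namespace; keep the initiator side out.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP (root ns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (inside ns)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port

# RPC sequence: transport -> subsystem -> listener -> backing bdev -> namespace.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512       # 1000 MiB null bdev, 512-byte blocks
rpc_cmd nvmf_subsystem_add_ns "$NQN" NULL1
```

Splitting the two devices across a network namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the root namespace) over a real NIC.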
00:17:59.040 [2024-11-19 07:41:50.758399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:59.608 [2024-11-19 07:41:51.440993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:59.608 [2024-11-19 07:41:51.457230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:59.608 NULL1
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering --
common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.608 07:41:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:17:59.608 [2024-11-19 07:41:51.532594] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:17:59.608 [2024-11-19 07:41:51.532716] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948970 ]
00:18:00.547 Attached to nqn.2016-06.io.spdk:cnode1
00:18:00.547 Namespace ID: 1 size: 1GB
00:18:00.547 fused_ordering(0)
[fused_ordering(1) through fused_ordering(755): repetitive per-iteration counter lines elided; timestamps 00:18:00.547 to 00:18:01.948]
00:18:01.948 fused_ordering(756) 00:18:01.948 fused_ordering(757) 00:18:01.948 fused_ordering(758) 00:18:01.948 fused_ordering(759) 00:18:01.948 fused_ordering(760) 00:18:01.948 fused_ordering(761) 00:18:01.948 fused_ordering(762) 00:18:01.948 fused_ordering(763) 00:18:01.948 fused_ordering(764) 00:18:01.948 fused_ordering(765) 00:18:01.948 fused_ordering(766) 00:18:01.948 fused_ordering(767) 00:18:01.948 fused_ordering(768) 00:18:01.948 fused_ordering(769) 00:18:01.948 fused_ordering(770) 00:18:01.948 fused_ordering(771) 00:18:01.948 fused_ordering(772) 00:18:01.948 fused_ordering(773) 00:18:01.948 fused_ordering(774) 00:18:01.948 fused_ordering(775) 00:18:01.948 fused_ordering(776) 00:18:01.948 fused_ordering(777) 00:18:01.948 fused_ordering(778) 00:18:01.948 fused_ordering(779) 00:18:01.948 fused_ordering(780) 00:18:01.948 fused_ordering(781) 00:18:01.948 fused_ordering(782) 00:18:01.948 fused_ordering(783) 00:18:01.948 fused_ordering(784) 00:18:01.948 fused_ordering(785) 00:18:01.948 fused_ordering(786) 00:18:01.948 fused_ordering(787) 00:18:01.948 fused_ordering(788) 00:18:01.948 fused_ordering(789) 00:18:01.948 fused_ordering(790) 00:18:01.948 fused_ordering(791) 00:18:01.948 fused_ordering(792) 00:18:01.948 fused_ordering(793) 00:18:01.948 fused_ordering(794) 00:18:01.948 fused_ordering(795) 00:18:01.948 fused_ordering(796) 00:18:01.948 fused_ordering(797) 00:18:01.948 fused_ordering(798) 00:18:01.948 fused_ordering(799) 00:18:01.948 fused_ordering(800) 00:18:01.948 fused_ordering(801) 00:18:01.948 fused_ordering(802) 00:18:01.948 fused_ordering(803) 00:18:01.948 fused_ordering(804) 00:18:01.948 fused_ordering(805) 00:18:01.948 fused_ordering(806) 00:18:01.948 fused_ordering(807) 00:18:01.948 fused_ordering(808) 00:18:01.948 fused_ordering(809) 00:18:01.948 fused_ordering(810) 00:18:01.948 fused_ordering(811) 00:18:01.948 fused_ordering(812) 00:18:01.948 fused_ordering(813) 00:18:01.948 fused_ordering(814) 00:18:01.948 fused_ordering(815) 00:18:01.948 
fused_ordering(816) 00:18:01.948 fused_ordering(817) 00:18:01.948 fused_ordering(818) 00:18:01.948 fused_ordering(819) 00:18:01.948 fused_ordering(820) 00:18:02.888 fused_ordering(821) 00:18:02.888 fused_ordering(822) 00:18:02.888 fused_ordering(823) 00:18:02.888 fused_ordering(824) 00:18:02.888 fused_ordering(825) 00:18:02.888 fused_ordering(826) 00:18:02.888 fused_ordering(827) 00:18:02.888 fused_ordering(828) 00:18:02.888 fused_ordering(829) 00:18:02.888 fused_ordering(830) 00:18:02.888 fused_ordering(831) 00:18:02.889 fused_ordering(832) 00:18:02.889 fused_ordering(833) 00:18:02.889 fused_ordering(834) 00:18:02.889 fused_ordering(835) 00:18:02.889 fused_ordering(836) 00:18:02.889 fused_ordering(837) 00:18:02.889 fused_ordering(838) 00:18:02.889 fused_ordering(839) 00:18:02.889 fused_ordering(840) 00:18:02.889 fused_ordering(841) 00:18:02.889 fused_ordering(842) 00:18:02.889 fused_ordering(843) 00:18:02.889 fused_ordering(844) 00:18:02.889 fused_ordering(845) 00:18:02.889 fused_ordering(846) 00:18:02.889 fused_ordering(847) 00:18:02.889 fused_ordering(848) 00:18:02.889 fused_ordering(849) 00:18:02.889 fused_ordering(850) 00:18:02.889 fused_ordering(851) 00:18:02.889 fused_ordering(852) 00:18:02.889 fused_ordering(853) 00:18:02.889 fused_ordering(854) 00:18:02.889 fused_ordering(855) 00:18:02.889 fused_ordering(856) 00:18:02.889 fused_ordering(857) 00:18:02.889 fused_ordering(858) 00:18:02.889 fused_ordering(859) 00:18:02.889 fused_ordering(860) 00:18:02.889 fused_ordering(861) 00:18:02.889 fused_ordering(862) 00:18:02.889 fused_ordering(863) 00:18:02.889 fused_ordering(864) 00:18:02.889 fused_ordering(865) 00:18:02.889 fused_ordering(866) 00:18:02.889 fused_ordering(867) 00:18:02.889 fused_ordering(868) 00:18:02.889 fused_ordering(869) 00:18:02.889 fused_ordering(870) 00:18:02.889 fused_ordering(871) 00:18:02.889 fused_ordering(872) 00:18:02.889 fused_ordering(873) 00:18:02.889 fused_ordering(874) 00:18:02.889 fused_ordering(875) 00:18:02.889 fused_ordering(876) 
00:18:02.889 fused_ordering(877) 00:18:02.889 fused_ordering(878) 00:18:02.889 fused_ordering(879) 00:18:02.889 fused_ordering(880) 00:18:02.889 fused_ordering(881) 00:18:02.889 fused_ordering(882) 00:18:02.889 fused_ordering(883) 00:18:02.889 fused_ordering(884) 00:18:02.889 fused_ordering(885) 00:18:02.889 fused_ordering(886) 00:18:02.889 fused_ordering(887) 00:18:02.889 fused_ordering(888) 00:18:02.889 fused_ordering(889) 00:18:02.889 fused_ordering(890) 00:18:02.889 fused_ordering(891) 00:18:02.889 fused_ordering(892) 00:18:02.889 fused_ordering(893) 00:18:02.889 fused_ordering(894) 00:18:02.889 fused_ordering(895) 00:18:02.889 fused_ordering(896) 00:18:02.889 fused_ordering(897) 00:18:02.889 fused_ordering(898) 00:18:02.889 fused_ordering(899) 00:18:02.889 fused_ordering(900) 00:18:02.889 fused_ordering(901) 00:18:02.889 fused_ordering(902) 00:18:02.889 fused_ordering(903) 00:18:02.889 fused_ordering(904) 00:18:02.889 fused_ordering(905) 00:18:02.889 fused_ordering(906) 00:18:02.889 fused_ordering(907) 00:18:02.889 fused_ordering(908) 00:18:02.889 fused_ordering(909) 00:18:02.889 fused_ordering(910) 00:18:02.889 fused_ordering(911) 00:18:02.889 fused_ordering(912) 00:18:02.889 fused_ordering(913) 00:18:02.889 fused_ordering(914) 00:18:02.889 fused_ordering(915) 00:18:02.889 fused_ordering(916) 00:18:02.889 fused_ordering(917) 00:18:02.889 fused_ordering(918) 00:18:02.889 fused_ordering(919) 00:18:02.889 fused_ordering(920) 00:18:02.889 fused_ordering(921) 00:18:02.889 fused_ordering(922) 00:18:02.889 fused_ordering(923) 00:18:02.889 fused_ordering(924) 00:18:02.889 fused_ordering(925) 00:18:02.889 fused_ordering(926) 00:18:02.889 fused_ordering(927) 00:18:02.889 fused_ordering(928) 00:18:02.889 fused_ordering(929) 00:18:02.889 fused_ordering(930) 00:18:02.889 fused_ordering(931) 00:18:02.889 fused_ordering(932) 00:18:02.889 fused_ordering(933) 00:18:02.889 fused_ordering(934) 00:18:02.889 fused_ordering(935) 00:18:02.889 fused_ordering(936) 00:18:02.889 
fused_ordering(937) 00:18:02.889 fused_ordering(938) 00:18:02.889 fused_ordering(939) 00:18:02.889 fused_ordering(940) 00:18:02.889 fused_ordering(941) 00:18:02.889 fused_ordering(942) 00:18:02.889 fused_ordering(943) 00:18:02.889 fused_ordering(944) 00:18:02.889 fused_ordering(945) 00:18:02.889 fused_ordering(946) 00:18:02.889 fused_ordering(947) 00:18:02.889 fused_ordering(948) 00:18:02.889 fused_ordering(949) 00:18:02.889 fused_ordering(950) 00:18:02.889 fused_ordering(951) 00:18:02.889 fused_ordering(952) 00:18:02.889 fused_ordering(953) 00:18:02.889 fused_ordering(954) 00:18:02.889 fused_ordering(955) 00:18:02.889 fused_ordering(956) 00:18:02.889 fused_ordering(957) 00:18:02.889 fused_ordering(958) 00:18:02.889 fused_ordering(959) 00:18:02.889 fused_ordering(960) 00:18:02.889 fused_ordering(961) 00:18:02.889 fused_ordering(962) 00:18:02.889 fused_ordering(963) 00:18:02.889 fused_ordering(964) 00:18:02.889 fused_ordering(965) 00:18:02.889 fused_ordering(966) 00:18:02.889 fused_ordering(967) 00:18:02.889 fused_ordering(968) 00:18:02.889 fused_ordering(969) 00:18:02.889 fused_ordering(970) 00:18:02.889 fused_ordering(971) 00:18:02.889 fused_ordering(972) 00:18:02.889 fused_ordering(973) 00:18:02.889 fused_ordering(974) 00:18:02.889 fused_ordering(975) 00:18:02.889 fused_ordering(976) 00:18:02.889 fused_ordering(977) 00:18:02.889 fused_ordering(978) 00:18:02.889 fused_ordering(979) 00:18:02.889 fused_ordering(980) 00:18:02.889 fused_ordering(981) 00:18:02.889 fused_ordering(982) 00:18:02.889 fused_ordering(983) 00:18:02.889 fused_ordering(984) 00:18:02.889 fused_ordering(985) 00:18:02.889 fused_ordering(986) 00:18:02.889 fused_ordering(987) 00:18:02.889 fused_ordering(988) 00:18:02.889 fused_ordering(989) 00:18:02.889 fused_ordering(990) 00:18:02.889 fused_ordering(991) 00:18:02.889 fused_ordering(992) 00:18:02.889 fused_ordering(993) 00:18:02.889 fused_ordering(994) 00:18:02.889 fused_ordering(995) 00:18:02.889 fused_ordering(996) 00:18:02.889 fused_ordering(997) 
00:18:02.889 fused_ordering(998) 00:18:02.889 fused_ordering(999) 00:18:02.889 fused_ordering(1000) 00:18:02.889 fused_ordering(1001) 00:18:02.889 fused_ordering(1002) 00:18:02.889 fused_ordering(1003) 00:18:02.889 fused_ordering(1004) 00:18:02.889 fused_ordering(1005) 00:18:02.889 fused_ordering(1006) 00:18:02.889 fused_ordering(1007) 00:18:02.889 fused_ordering(1008) 00:18:02.889 fused_ordering(1009) 00:18:02.889 fused_ordering(1010) 00:18:02.889 fused_ordering(1011) 00:18:02.889 fused_ordering(1012) 00:18:02.889 fused_ordering(1013) 00:18:02.889 fused_ordering(1014) 00:18:02.889 fused_ordering(1015) 00:18:02.889 fused_ordering(1016) 00:18:02.889 fused_ordering(1017) 00:18:02.889 fused_ordering(1018) 00:18:02.889 fused_ordering(1019) 00:18:02.889 fused_ordering(1020) 00:18:02.889 fused_ordering(1021) 00:18:02.889 fused_ordering(1022) 00:18:02.889 fused_ordering(1023) 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:02.889 rmmod nvme_tcp 00:18:02.889 rmmod nvme_fabrics 00:18:02.889 rmmod nvme_keyring 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2948817 ']' 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2948817 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2948817 ']' 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2948817 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2948817 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2948817' 00:18:02.889 killing process with pid 2948817 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2948817 00:18:02.889 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2948817 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.269 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.193 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:06.193 00:18:06.193 real 0m9.958s 00:18:06.193 user 0m8.143s 00:18:06.193 sys 0m3.542s 00:18:06.193 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.193 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.193 ************************************ 00:18:06.193 END TEST nvmf_fused_ordering 00:18:06.193 ************************************ 00:18:06.193 07:41:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:06.193 07:41:57 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:06.193 07:41:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.193 07:41:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:06.193 ************************************ 00:18:06.193 START TEST nvmf_ns_masking 00:18:06.193 ************************************ 00:18:06.193 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:06.193 * Looking for test storage... 00:18:06.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.193 07:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.193 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:06.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.193 --rc genhtml_branch_coverage=1 00:18:06.193 --rc genhtml_function_coverage=1 00:18:06.193 --rc genhtml_legend=1 00:18:06.193 --rc geninfo_all_blocks=1 00:18:06.193 --rc geninfo_unexecuted_blocks=1 00:18:06.193 00:18:06.193 ' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:06.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.194 --rc genhtml_branch_coverage=1 00:18:06.194 --rc genhtml_function_coverage=1 00:18:06.194 --rc genhtml_legend=1 00:18:06.194 --rc geninfo_all_blocks=1 00:18:06.194 --rc geninfo_unexecuted_blocks=1 00:18:06.194 00:18:06.194 ' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:06.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.194 --rc genhtml_branch_coverage=1 00:18:06.194 --rc genhtml_function_coverage=1 00:18:06.194 --rc genhtml_legend=1 00:18:06.194 --rc geninfo_all_blocks=1 00:18:06.194 --rc geninfo_unexecuted_blocks=1 00:18:06.194 00:18:06.194 ' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:06.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.194 --rc genhtml_branch_coverage=1 00:18:06.194 --rc 
genhtml_function_coverage=1 00:18:06.194 --rc genhtml_legend=1 00:18:06.194 --rc geninfo_all_blocks=1 00:18:06.194 --rc geninfo_unexecuted_blocks=1 00:18:06.194 00:18:06.194 ' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:06.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=527b7df6-818f-4c27-bc78-f16bb36e97d7 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=250b1895-aa03-491b-855d-5058fadf41b6 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=57cf1981-b3ed-488e-8973-e56cdfa86f61 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:06.194 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:08.100 07:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.100 07:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:08.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:08.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.100 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:18:08.101 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:08.101 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.101 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:08.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:18:08.360 00:18:08.360 --- 10.0.0.2 ping statistics --- 00:18:08.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.360 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:08.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:18:08.360 00:18:08.360 --- 10.0.0.1 ping statistics --- 00:18:08.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.360 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:08.360 07:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2951453 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2951453 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2951453 ']' 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.360 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.360 [2024-11-19 07:42:00.268676] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:08.360 [2024-11-19 07:42:00.268841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.620 [2024-11-19 07:42:00.432529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.878 [2024-11-19 07:42:00.569951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.878 [2024-11-19 07:42:00.570039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:08.878 [2024-11-19 07:42:00.570063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.878 [2024-11-19 07:42:00.570087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.879 [2024-11-19 07:42:00.570106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.879 [2024-11-19 07:42:00.571768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.445 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.445 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:09.445 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.445 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.445 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:09.445 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.445 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:09.706 [2024-11-19 07:42:01.540434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.706 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:09.706 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:09.706 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:18:10.307 Malloc1 00:18:10.307 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:10.573 Malloc2 00:18:10.574 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:10.832 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:11.090 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.348 [2024-11-19 07:42:03.126098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.348 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:11.348 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 57cf1981-b3ed-488e-8973-e56cdfa86f61 -a 10.0.0.2 -s 4420 -i 4 00:18:11.348 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:11.348 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:11.348 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.348 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:11.348 07:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:13.882 [ 0]:0x1 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:13.882 
07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d25311bdfa5941198860a30c9f16244a 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d25311bdfa5941198860a30c9f16244a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:13.882 [ 0]:0x1 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d25311bdfa5941198860a30c9f16244a 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d25311bdfa5941198860a30c9f16244a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:13.882 [ 1]:0x2 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9145c28ffa464f44ac1da2a0f4a6ce44 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9145c28ffa464f44ac1da2a0f4a6ce44 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:13.882 07:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.140 07:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:14.399 07:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:14.658 07:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:14.658 07:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 57cf1981-b3ed-488e-8973-e56cdfa86f61 -a 10.0.0.2 -s 4420 -i 4 00:18:14.918 07:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:14.918 07:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:14.918 07:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.918 07:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:14.918 07:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:14.918 07:42:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:16.824 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:16.824 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:16.824 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:16.824 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:16.824 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.824 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:16.824 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:16.824 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:17.083 [ 0]:0x2
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9145c28ffa464f44ac1da2a0f4a6ce44
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9145c28ffa464f44ac1da2a0f4a6ce44 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:17.083 07:42:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:17.342 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:18:17.342 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:17.342 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:17.342 [ 0]:0x1
00:18:17.342 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:17.342 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d25311bdfa5941198860a30c9f16244a
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d25311bdfa5941198860a30c9f16244a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:17.599 [ 1]:0x2
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9145c28ffa464f44ac1da2a0f4a6ce44
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9145c28ffa464f44ac1da2a0f4a6ce44 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:17.599 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:17.857 [ 0]:0x2
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9145c28ffa464f44ac1da2a0f4a6ce44
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9145c28ffa464f44ac1da2a0f4a6ce44 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:17.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:17.857 07:42:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:18.117 07:42:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:18:18.117 07:42:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 57cf1981-b3ed-488e-8973-e56cdfa86f61 -a 10.0.0.2 -s 4420 -i 4
00:18:18.376 07:42:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:18:18.376 07:42:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:18:18.376 07:42:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:18.376 07:42:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:18:18.376 07:42:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:18:18.376 07:42:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:18:20.279 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:20.279 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:20.279 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:20.279 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:18:20.279 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:20.279 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:18:20.279 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:18:20.279 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:20.537 [ 0]:0x1
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d25311bdfa5941198860a30c9f16244a
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d25311bdfa5941198860a30c9f16244a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:20.537 [ 1]:0x2
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9145c28ffa464f44ac1da2a0f4a6ce44
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9145c28ffa464f44ac1da2a0f4a6ce44 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:20.537 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:20.796 [ 0]:0x2
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9145c28ffa464f44ac1da2a0f4a6ce44
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9145c28ffa464f44ac1da2a0f4a6ce44 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:18:20.796 07:42:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:21.362 [2024-11-19 07:42:13.027741] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:18:21.362 request:
00:18:21.362 {
00:18:21.362 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:21.362 "nsid": 2,
00:18:21.362 "host": "nqn.2016-06.io.spdk:host1",
00:18:21.362 "method": "nvmf_ns_remove_host",
00:18:21.362 "req_id": 1
00:18:21.362 }
00:18:21.362 Got JSON-RPC error response
00:18:21.362 response:
00:18:21.362 {
00:18:21.362 "code": -32602,
00:18:21.362 "message": "Invalid parameters"
00:18:21.362 }
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:21.362 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:21.363 [ 0]:0x2
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9145c28ffa464f44ac1da2a0f4a6ce44
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9145c28ffa464f44ac1da2a0f4a6ce44 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:18:21.363 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:21.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2953692
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2953692 /var/tmp/host.sock
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2953692 ']'
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:21.621 07:42:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:18:21.621 [2024-11-19 07:42:13.444808] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:18:21.621 [2024-11-19 07:42:13.444971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2953692 ]
00:18:21.880 [2024-11-19 07:42:13.602515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:21.880 [2024-11-19 07:42:13.743154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:22.815 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:22.815 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:18:22.815 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:23.073 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:18:23.639 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 527b7df6-818f-4c27-bc78-f16bb36e97d7
00:18:23.639 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:23.639 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 527B7DF6818F4C27BC78F16BB36E97D7 -i
00:18:23.897 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 250b1895-aa03-491b-855d-5058fadf41b6
00:18:23.897 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:23.897 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 250B1895AA03491B855D5058FADF41B6 -i
00:18:24.155 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:24.414 07:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:18:24.672 07:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:18:24.672 07:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:18:24.930 nvme0n1
00:18:24.930 07:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:18:24.930 07:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:18:25.189 nvme1n2
00:18:25.448 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:18:25.448 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:18:25.448 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:18:25.448 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:18:25.448 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:18:25.706 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:18:25.706 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:18:25.706 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:18:25.706 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:18:25.965 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 527b7df6-818f-4c27-bc78-f16bb36e97d7 == \5\2\7\b\7\d\f\6\-\8\1\8\f\-\4\c\2\7\-\b\c\7\8\-\f\1\6\b\b\3\6\e\9\7\d\7 ]]
00:18:25.965 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:18:25.965 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:18:25.965 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:18:26.221 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 250b1895-aa03-491b-855d-5058fadf41b6 == \2\5\0\b\1\8\9\5\-\a\a\0\3\-\4\9\1\b\-\8\5\5\d\-\5\0\5\8\f\a\d\f\4\1\b\6 ]]
00:18:26.221 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:26.479 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 527b7df6-818f-4c27-bc78-f16bb36e97d7
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 527B7DF6818F4C27BC78F16BB36E97D7
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 527B7DF6818F4C27BC78F16BB36E97D7
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:18:26.738 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 527B7DF6818F4C27BC78F16BB36E97D7
00:18:26.997 [2024-11-19 07:42:18.853951] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:18:26.997 [2024-11-19 07:42:18.854010] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:18:26.997 [2024-11-19 07:42:18.854062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:26.997 request:
00:18:26.997 {
00:18:26.997 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:26.997 "namespace": {
00:18:26.997 "bdev_name": "invalid",
00:18:26.997 "nsid": 1,
00:18:26.997 "nguid": "527B7DF6818F4C27BC78F16BB36E97D7",
00:18:26.997 "no_auto_visible": false
00:18:26.997 },
00:18:26.997 "method": "nvmf_subsystem_add_ns",
00:18:26.997 "req_id": 1
00:18:26.997 }
00:18:26.997 Got JSON-RPC error response
00:18:26.997 response:
00:18:26.997 {
00:18:26.997 "code": -32602,
00:18:26.997 "message": "Invalid parameters"
00:18:26.997 }
00:18:26.997 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:26.997 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:26.997 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:26.997 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:26.997 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 527b7df6-818f-4c27-bc78-f16bb36e97d7
00:18:26.997 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:26.997 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 527B7DF6818F4C27BC78F16BB36E97D7 -i
00:18:27.256 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2953692
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2953692 ']'
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2953692
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2953692
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2953692'
killing process with pid 2953692
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2953692
00:18:29.790 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2953692
00:18:32.331 07:42:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini
00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync
00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e
00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod
nvme_fabrics 00:18:32.331 rmmod nvme_keyring 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2951453 ']' 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2951453 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2951453 ']' 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2951453 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2951453 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2951453' 00:18:32.331 killing process with pid 2951453 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2951453 00:18:32.331 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2951453 00:18:33.707 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:33.707 
07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:33.707 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:33.707 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:33.707 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:33.707 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:33.707 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:33.966 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:33.966 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:33.966 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.966 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.966 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:35.873 00:18:35.873 real 0m29.736s 00:18:35.873 user 0m44.269s 00:18:35.873 sys 0m4.821s 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:35.873 ************************************ 00:18:35.873 END TEST nvmf_ns_masking 00:18:35.873 ************************************ 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.873 ************************************ 00:18:35.873 START TEST nvmf_nvme_cli 00:18:35.873 ************************************ 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:35.873 * Looking for test storage... 00:18:35.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:35.873 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.133 07:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:36.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.133 --rc genhtml_branch_coverage=1 00:18:36.133 --rc genhtml_function_coverage=1 00:18:36.133 --rc genhtml_legend=1 00:18:36.133 --rc geninfo_all_blocks=1 00:18:36.133 --rc geninfo_unexecuted_blocks=1 00:18:36.133 
00:18:36.133 ' 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:36.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.133 --rc genhtml_branch_coverage=1 00:18:36.133 --rc genhtml_function_coverage=1 00:18:36.133 --rc genhtml_legend=1 00:18:36.133 --rc geninfo_all_blocks=1 00:18:36.133 --rc geninfo_unexecuted_blocks=1 00:18:36.133 00:18:36.133 ' 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:36.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.133 --rc genhtml_branch_coverage=1 00:18:36.133 --rc genhtml_function_coverage=1 00:18:36.133 --rc genhtml_legend=1 00:18:36.133 --rc geninfo_all_blocks=1 00:18:36.133 --rc geninfo_unexecuted_blocks=1 00:18:36.133 00:18:36.133 ' 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:36.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.133 --rc genhtml_branch_coverage=1 00:18:36.133 --rc genhtml_function_coverage=1 00:18:36.133 --rc genhtml_legend=1 00:18:36.133 --rc geninfo_all_blocks=1 00:18:36.133 --rc geninfo_unexecuted_blocks=1 00:18:36.133 00:18:36.133 ' 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.133 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.134 07:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:36.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:36.134 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:38.038 07:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:38.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:38.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:38.038 07:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:38.038 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:38.038 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:38.038 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:38.039 07:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:38.039 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:38.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:38.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:18:38.039 00:18:38.039 --- 10.0.0.2 ping statistics --- 00:18:38.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.039 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:38.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:38.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:18:38.298 00:18:38.298 --- 10.0.0.1 ping statistics --- 00:18:38.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:38.298 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:38.298 07:42:29 
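The namespace plumbing traced above follows a fixed recipe: create a namespace, move the target-side interface into it, assign addresses on both sides, bring the links up, open the NVMe/TCP port, and verify with ping. A minimal sketch of that sequence, with interface names and addresses copied from the log; the `run` echo wrapper is added here so the sketch is side-effect free and does not require root — drop the wrapper to apply the commands for real:

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup performed in the trace above.
# 'run' only echoes each command; remove it to execute them (as root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk    # target-side network namespace (from the log)
TGT_IF=cvl_0_0        # interface moved into the namespace (target side)
INI_IF=cvl_0_1        # interface left in the default namespace (initiator side)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port on the initiator-side interface
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# sanity check: initiator namespace can reach the target address
run ping -c 1 10.0.0.2
```

The two ping exchanges in the log (initiator to 10.0.0.2, then target namespace back to 10.0.0.1) confirm both directions of this link before any NVMe traffic is attempted.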
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.298 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:38.298 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2957108 00:18:38.298 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:38.298 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2957108 00:18:38.298 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2957108 ']' 00:18:38.298 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.298 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.298 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.298 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.298 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:38.298 [2024-11-19 07:42:30.106906] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:18:38.298 [2024-11-19 07:42:30.107070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.557 [2024-11-19 07:42:30.271268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.557 [2024-11-19 07:42:30.416714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.557 [2024-11-19 07:42:30.416785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.557 [2024-11-19 07:42:30.416811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.557 [2024-11-19 07:42:30.416835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.557 [2024-11-19 07:42:30.416855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
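The `nvmfappstart` step above launches the SPDK target inside the namespace and then blocks in `waitforlisten` until the app answers on its UNIX RPC socket. A rough sketch of the equivalent manual invocation (flags and paths taken from the log; the echo-only `run` wrapper and the `rpc_get_methods` readiness probe are illustrative additions, not the harness's exact polling loop):

```shell
#!/usr/bin/env bash
# Sketch of launching nvmf_tgt inside the test namespace (from the log above).
# 'run' only echoes; remove it to execute for real from a built SPDK tree.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk

# -i 0: shm instance id, -e 0xFFFF: tracepoint group mask, -m 0xF: 4-core mask
run ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
# readiness probe: an RPC that succeeds once the app listens on the socket
run ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods
```

Once the four reactor threads report started (one per bit in the 0xF core mask, matching the four "Reactor started on core N" lines above), the target is ready for RPC provisioning.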
00:18:38.557 [2024-11-19 07:42:30.419613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.557 [2024-11-19 07:42:30.419667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.558 [2024-11-19 07:42:30.419739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.558 [2024-11-19 07:42:30.419742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.492 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.492 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:39.492 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.492 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.492 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 [2024-11-19 07:42:31.136274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 Malloc0 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 Malloc1 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 [2024-11-19 07:42:31.331082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.493 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:39.753 00:18:39.753 Discovery Log Number of Records 2, Generation counter 2 00:18:39.753 =====Discovery Log Entry 0====== 00:18:39.753 trtype: tcp 00:18:39.753 adrfam: ipv4 00:18:39.753 subtype: current discovery subsystem 00:18:39.753 treq: not required 00:18:39.753 portid: 0 00:18:39.753 trsvcid: 4420 
00:18:39.753 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:39.753 traddr: 10.0.0.2 00:18:39.753 eflags: explicit discovery connections, duplicate discovery information 00:18:39.753 sectype: none 00:18:39.753 =====Discovery Log Entry 1====== 00:18:39.753 trtype: tcp 00:18:39.753 adrfam: ipv4 00:18:39.753 subtype: nvme subsystem 00:18:39.753 treq: not required 00:18:39.753 portid: 0 00:18:39.753 trsvcid: 4420 00:18:39.753 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:39.753 traddr: 10.0.0.2 00:18:39.753 eflags: none 00:18:39.753 sectype: none 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:39.753 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:40.321 07:42:32 
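The `rpc_cmd` calls traced above provision the target end to end: create the TCP transport, back it with two malloc bdevs, expose them as namespaces of one subsystem, and add data plus discovery listeners — which is exactly why `nvme discover` then reports two log entries. A side-effect-free sketch of that RPC sequence (arguments copied from the log; the echo-only `run` wrapper is added here, and in the real harness `rpc_cmd` dispatches to `scripts/rpc.py` against the running target):

```shell
#!/usr/bin/env bash
# Sketch of the target provisioning RPCs seen in the trace above.
# 'run' only echoes; remove it to issue the RPCs against a live nvmf_tgt.
run() { echo "+ $*"; }

rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

run "$rpc" nvmf_create_transport -t tcp -o -u 8192
run "$rpc" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512-byte blocks
run "$rpc" bdev_malloc_create 64 512 -b Malloc1
run "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME \
    -d SPDK_Controller1 -i 291
run "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
run "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1
run "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
run "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The discovery log in the trace mirrors this state: entry 0 is the discovery subsystem itself, entry 1 is `nqn.2016-06.io.spdk:cnode1`, both on 10.0.0.2:4420.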
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:40.321 07:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:40.321 07:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.321 07:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:40.321 07:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:40.321 07:42:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:42.855 
07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:42.855 /dev/nvme0n2 ]] 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:42.855 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.116 rmmod nvme_tcp 00:18:43.116 rmmod nvme_fabrics 00:18:43.116 rmmod nvme_keyring 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2957108 ']' 
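The host side of the test, also visible in the trace, is symmetric: discover, connect, wait until `lsblk` shows both namespaces carrying the subsystem serial (`waitforserial` counts matches against the expected device count of 2), then disconnect and wait for the serial to disappear. A sketch with the hostnqn/hostid values from the log and the same echo-only `run` wrapper (these commands need root and nvme-cli when run for real):

```shell
#!/usr/bin/env bash
# Sketch of the initiator-side connect/verify/disconnect cycle from the trace.
# 'run' only echoes; remove it to drive a real NVMe/TCP target.
run() { echo "+ $*"; }

hostid=5b23e107-7094-e311-b1cb-001e67a97d55
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"
nqn=nqn.2016-06.io.spdk:cnode1

run nvme discover --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -a 10.0.0.2 -s 4420
run nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$nqn" \
    -a 10.0.0.2 -s 4420
# waitforserial: poll until 2 block devices report the subsystem serial
run lsblk -l -o NAME,SERIAL      # grep -c SPDKISFASTANDAWESOME should reach 2
run nvme disconnect -n "$nqn"
```

This matches the trace above: `/dev/nvme0n1` and `/dev/nvme0n2` appear after connect (one per malloc namespace), and the disconnect reports one controller torn down.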
00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2957108 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2957108 ']' 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2957108 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2957108 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2957108' 00:18:43.116 killing process with pid 2957108 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2957108 00:18:43.116 07:42:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2957108 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.544 07:42:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:47.092 00:18:47.092 real 0m10.751s 00:18:47.092 user 0m23.563s 00:18:47.092 sys 0m2.518s 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:47.092 ************************************ 00:18:47.092 END TEST nvmf_nvme_cli 00:18:47.092 ************************************ 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:47.092 ************************************ 00:18:47.092 START TEST 
nvmf_auth_target 00:18:47.092 ************************************ 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:47.092 * Looking for test storage... 00:18:47.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.092 
07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:47.092 
07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:47.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.092 --rc genhtml_branch_coverage=1 00:18:47.092 --rc genhtml_function_coverage=1 00:18:47.092 --rc genhtml_legend=1 00:18:47.092 --rc geninfo_all_blocks=1 00:18:47.092 --rc geninfo_unexecuted_blocks=1 00:18:47.092 00:18:47.092 ' 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:47.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.092 --rc genhtml_branch_coverage=1 00:18:47.092 --rc genhtml_function_coverage=1 00:18:47.092 --rc genhtml_legend=1 00:18:47.092 --rc geninfo_all_blocks=1 00:18:47.092 --rc geninfo_unexecuted_blocks=1 00:18:47.092 00:18:47.092 ' 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:47.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.092 --rc genhtml_branch_coverage=1 00:18:47.092 --rc genhtml_function_coverage=1 00:18:47.092 --rc genhtml_legend=1 00:18:47.092 --rc geninfo_all_blocks=1 00:18:47.092 --rc geninfo_unexecuted_blocks=1 00:18:47.092 00:18:47.092 ' 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:47.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.092 --rc genhtml_branch_coverage=1 00:18:47.092 --rc genhtml_function_coverage=1 00:18:47.092 --rc genhtml_legend=1 00:18:47.092 --rc geninfo_all_blocks=1 00:18:47.092 --rc geninfo_unexecuted_blocks=1 00:18:47.092 00:18:47.092 ' 00:18:47.092 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:47.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:47.093 07:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:47.093 07:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:47.093 07:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.998 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:48.999 07:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:48.999 07:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:48.999 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:48.999 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.999 
07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:48.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.999 
07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:48.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:48.999 07:42:40 
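The "Found net devices under 0000:0a:00.0: cvl_0_0" lines above come from a glob over sysfs: for each candidate PCI function, nvmf/common.sh expands `/sys/bus/pci/devices/$pci/net/*` and strips the path prefix to recover interface names. A minimal sketch of that lookup, using a throwaway directory in place of real sysfs so the glob logic runs anywhere (the PCI address and `cvl_0_0` name are taken from the log; the `sysroot` stand-in is an addition for portability):

```shell
#!/usr/bin/env bash
# Simulate the sysfs layout the trace globs against.
sysroot=$(mktemp -d)
pci=0000:0a:00.0
mkdir -p "$sysroot/devices/$pci/net/cvl_0_0"

# Same two steps the trace shows at nvmf/common.sh@411 and @427:
pci_net_devs=("$sysroot/devices/$pci/net/"*)   # one entry per netdev directory
pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface basename

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysroot"
```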
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:48.999 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:49.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:18:49.258 00:18:49.258 --- 10.0.0.2 ping statistics --- 00:18:49.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.258 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:18:49.258 00:18:49.258 --- 10.0.0.1 ping statistics --- 00:18:49.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.258 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
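The `nvmf_tcp_init` trace above builds a two-interface loopback: one E810 port (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace as the target side (10.0.0.2), while its peer (`cvl_0_1`) stays in the root namespace as the initiator (10.0.0.1), with an iptables accept rule for port 4420; the two pings then verify reachability in both directions. The commands below are reconstructed verbatim from the trace; the `$run` dry-run wrapper is an addition, since applying them for real needs root and the actual `cvl_0_*` interfaces:

```shell
#!/usr/bin/env bash
# Dry-run by default: commands are echoed, not executed.
# On a real test rig, run as root with DRY_RUN="" to apply them.
run=${DRY_RUN:-echo}

$run ip -4 addr flush cvl_0_0
$run ip -4 addr flush cvl_0_1
$run ip netns add cvl_0_0_ns_spdk                          # target-side namespace
$run ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target port into it
$run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP (root ns)
$run ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0                    # target IP (inside ns)
$run ip link set cvl_0_1 up
$run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$run ip netns exec cvl_0_0_ns_spdk ip link set lo up
$run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```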
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.258 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2959789 00:18:49.259 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:49.259 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2959789 00:18:49.259 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2959789 ']' 00:18:49.259 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.259 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.259 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:49.259 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.259 07:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2959934 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
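`waitforlisten 2959789` above blocks until the freshly launched `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock` (here it succeeds, returning 0 at `autotest_common.sh@868`). The loop body isn't shown in the trace, so the following is only an illustrative poll-until-socket-exists sketch under that assumption — the real helper also checks the pid and issues RPCs; the background `touch` stands in for the daemon creating its socket:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the daemon's RPC socket appearing after startup.
rpc_addr=$(mktemp -u)
( sleep 0.2; touch "$rpc_addr" ) &

# Poll for the socket, mirroring the max_retries=100 seen in the trace.
max_retries=100
i=0
while [ ! -e "$rpc_addr" ] && [ "$i" -lt "$max_retries" ]; do
    sleep 0.1
    i=$((i + 1))
done
wait
echo "socket ready after $i polls"
rm -f "$rpc_addr"
```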
-- nvmf/common.sh@754 -- # digest=null 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7a82493ec04feb3a32bcba30bbd414947dadd1710f35e273 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zFr 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7a82493ec04feb3a32bcba30bbd414947dadd1710f35e273 0 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7a82493ec04feb3a32bcba30bbd414947dadd1710f35e273 0 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7a82493ec04feb3a32bcba30bbd414947dadd1710f35e273 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:50.194 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zFr 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zFr 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.zFr 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
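Each `gen_dhchap_key <digest> <len>` call traced above draws `len/2` random bytes with `xxd` (yielding `len` hex characters), writes them to a `mktemp` file such as `/tmp/spdk.key-null.zFr`, wraps them via the inline `python -` step (`format_key DHHC-1 …`), and tightens permissions to 0600. The sketch below covers only the key-material half; the DHHC-1 envelope produced by the python step is omitted because its exact format isn't visible in the log:

```shell
#!/usr/bin/env bash
# Key-material step of gen_dhchap_key, as shown at nvmf/common.sh@755-758:
# 48 hex chars = 24 random bytes, stored in a mode-0600 temp file.
len=48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 bytes -> len hex chars
file=$(mktemp -t spdk.key-null.XXX)
printf '%s\n' "$key" > "$file"
chmod 0600 "$file"
echo "${#key}"                                   # prints 48
```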
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c27d1d90fad6515b1b5f7de7d86b3354eec8e02e010ea20d6f187c13738e7b2a 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.I6k 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c27d1d90fad6515b1b5f7de7d86b3354eec8e02e010ea20d6f187c13738e7b2a 3 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c27d1d90fad6515b1b5f7de7d86b3354eec8e02e010ea20d6f187c13738e7b2a 3 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c27d1d90fad6515b1b5f7de7d86b3354eec8e02e010ea20d6f187c13738e7b2a 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.I6k 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.I6k 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.I6k 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=40d159e4785b22062f2e80c411766f4e 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5pa 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 40d159e4785b22062f2e80c411766f4e 1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
40d159e4785b22062f2e80c411766f4e 1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=40d159e4785b22062f2e80c411766f4e 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5pa 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5pa 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.5pa 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b462cd9fedb0b439eca7a64f6a11c54ddd909aab48b73438 00:18:50.454 07:42:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ehs 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b462cd9fedb0b439eca7a64f6a11c54ddd909aab48b73438 2 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b462cd9fedb0b439eca7a64f6a11c54ddd909aab48b73438 2 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b462cd9fedb0b439eca7a64f6a11c54ddd909aab48b73438 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ehs 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ehs 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ehs 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3b778632fdc2ad007898241fc13885def54276c84c8d378c 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FBM 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3b778632fdc2ad007898241fc13885def54276c84c8d378c 2 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3b778632fdc2ad007898241fc13885def54276c84c8d378c 2 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3b778632fdc2ad007898241fc13885def54276c84c8d378c 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FBM 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FBM 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.FBM 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9d145de15c6c859e53d7de683db3e79b 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.p8x 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9d145de15c6c859e53d7de683db3e79b 1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9d145de15c6c859e53d7de683db3e79b 1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:50.454 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9d145de15c6c859e53d7de683db3e79b 00:18:50.455 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:18:50.455 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.p8x 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.p8x 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.p8x 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:50.713 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a81e19b53861a9d1adc37040cfa4336fd10cea20674cd199a224037209aa276f 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jCF 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a81e19b53861a9d1adc37040cfa4336fd10cea20674cd199a224037209aa276f 3 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 a81e19b53861a9d1adc37040cfa4336fd10cea20674cd199a224037209aa276f 3 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a81e19b53861a9d1adc37040cfa4336fd10cea20674cd199a224037209aa276f 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jCF 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jCF 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.jCF 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2959789 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2959789 ']' 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.714 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.973 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.973 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:50.973 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2959934 /var/tmp/host.sock 00:18:50.973 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2959934 ']' 00:18:50.973 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:50.973 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.973 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:50.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:50.973 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.973 07:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zFr 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.zFr 00:18:51.540 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.zFr 00:18:52.107 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.I6k ]] 00:18:52.107 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I6k 00:18:52.107 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.107 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.107 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.107 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I6k 00:18:52.107 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I6k 00:18:52.365 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:52.365 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5pa 00:18:52.365 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.365 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.365 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.365 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5pa 00:18:52.365 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5pa 00:18:52.623 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.ehs ]] 00:18:52.623 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ehs 00:18:52.623 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.623 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.623 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.623 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ehs 00:18:52.623 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ehs 00:18:52.882 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:52.882 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FBM 00:18:52.882 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.882 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.882 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.882 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FBM 00:18:52.882 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FBM 00:18:53.140 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.p8x ]] 00:18:53.140 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p8x 00:18:53.140 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.140 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.140 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.140 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p8x 00:18:53.140 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p8x 00:18:53.397 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:53.397 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jCF 00:18:53.397 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.397 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.654 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.654 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.jCF 00:18:53.654 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.jCF 00:18:53.913 07:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:53.913 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:53.913 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.913 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.913 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.913 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.171 07:42:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.171 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.172 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.430 00:18:54.430 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.430 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.431 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.689 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.689 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.689 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.689 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.689 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.689 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.689 { 00:18:54.689 "cntlid": 1, 00:18:54.689 "qid": 0, 00:18:54.689 "state": "enabled", 00:18:54.689 "thread": "nvmf_tgt_poll_group_000", 00:18:54.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:54.689 "listen_address": { 00:18:54.689 "trtype": "TCP", 00:18:54.689 "adrfam": "IPv4", 00:18:54.689 "traddr": "10.0.0.2", 00:18:54.689 "trsvcid": "4420" 00:18:54.689 }, 00:18:54.689 "peer_address": { 00:18:54.689 "trtype": "TCP", 00:18:54.689 "adrfam": "IPv4", 00:18:54.689 "traddr": "10.0.0.1", 00:18:54.689 "trsvcid": "58938" 00:18:54.689 }, 00:18:54.689 "auth": { 00:18:54.689 "state": "completed", 00:18:54.689 "digest": "sha256", 00:18:54.689 "dhgroup": "null" 00:18:54.689 } 00:18:54.689 } 00:18:54.689 ]' 00:18:54.689 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.689 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.689 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.948 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:54.948 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.948 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.948 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.948 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.206 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:18:55.206 07:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:18:56.140 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.140 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:56.140 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.140 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.140 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.140 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.140 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:18:56.140 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.399 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.965 00:18:56.965 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.965 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.965 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.225 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.225 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.225 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.225 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.225 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.225 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.225 { 00:18:57.225 "cntlid": 3, 00:18:57.225 "qid": 0, 00:18:57.225 "state": "enabled", 00:18:57.225 "thread": "nvmf_tgt_poll_group_000", 00:18:57.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:57.225 "listen_address": { 00:18:57.225 "trtype": "TCP", 00:18:57.225 "adrfam": "IPv4", 00:18:57.225 
"traddr": "10.0.0.2", 00:18:57.225 "trsvcid": "4420" 00:18:57.225 }, 00:18:57.225 "peer_address": { 00:18:57.225 "trtype": "TCP", 00:18:57.225 "adrfam": "IPv4", 00:18:57.225 "traddr": "10.0.0.1", 00:18:57.225 "trsvcid": "44740" 00:18:57.225 }, 00:18:57.225 "auth": { 00:18:57.225 "state": "completed", 00:18:57.225 "digest": "sha256", 00:18:57.225 "dhgroup": "null" 00:18:57.225 } 00:18:57.225 } 00:18:57.225 ]' 00:18:57.225 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.225 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.225 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.225 07:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:57.225 07:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.225 07:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.225 07:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.225 07:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.482 07:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:18:57.482 07:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:18:58.415 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.415 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.415 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.415 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.415 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.415 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.415 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.415 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.982 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.241 00:18:59.241 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.241 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.241 
07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.499 { 00:18:59.499 "cntlid": 5, 00:18:59.499 "qid": 0, 00:18:59.499 "state": "enabled", 00:18:59.499 "thread": "nvmf_tgt_poll_group_000", 00:18:59.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:59.499 "listen_address": { 00:18:59.499 "trtype": "TCP", 00:18:59.499 "adrfam": "IPv4", 00:18:59.499 "traddr": "10.0.0.2", 00:18:59.499 "trsvcid": "4420" 00:18:59.499 }, 00:18:59.499 "peer_address": { 00:18:59.499 "trtype": "TCP", 00:18:59.499 "adrfam": "IPv4", 00:18:59.499 "traddr": "10.0.0.1", 00:18:59.499 "trsvcid": "44764" 00:18:59.499 }, 00:18:59.499 "auth": { 00:18:59.499 "state": "completed", 00:18:59.499 "digest": "sha256", 00:18:59.499 "dhgroup": "null" 00:18:59.499 } 00:18:59.499 } 00:18:59.499 ]' 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.499 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.758 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:18:59.758 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.133 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.391 00:19:01.391 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.391 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.391 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.649 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.649 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.649 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.649 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.649 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.649 
07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.649 { 00:19:01.649 "cntlid": 7, 00:19:01.649 "qid": 0, 00:19:01.649 "state": "enabled", 00:19:01.650 "thread": "nvmf_tgt_poll_group_000", 00:19:01.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:01.650 "listen_address": { 00:19:01.650 "trtype": "TCP", 00:19:01.650 "adrfam": "IPv4", 00:19:01.650 "traddr": "10.0.0.2", 00:19:01.650 "trsvcid": "4420" 00:19:01.650 }, 00:19:01.650 "peer_address": { 00:19:01.650 "trtype": "TCP", 00:19:01.650 "adrfam": "IPv4", 00:19:01.650 "traddr": "10.0.0.1", 00:19:01.650 "trsvcid": "44788" 00:19:01.650 }, 00:19:01.650 "auth": { 00:19:01.650 "state": "completed", 00:19:01.650 "digest": "sha256", 00:19:01.650 "dhgroup": "null" 00:19:01.650 } 00:19:01.650 } 00:19:01.650 ]' 00:19:01.650 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.908 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.908 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.908 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:01.908 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.908 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.908 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.908 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.167 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:19:02.167 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:19:03.101 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.101 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.101 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.101 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.101 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.101 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.101 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.101 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:03.101 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:03.667 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.668 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.926 00:19:03.926 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.926 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.926 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.185 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.185 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.185 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.185 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.185 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.185 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.185 { 00:19:04.185 "cntlid": 9, 00:19:04.185 "qid": 0, 00:19:04.185 "state": "enabled", 00:19:04.185 "thread": "nvmf_tgt_poll_group_000", 00:19:04.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:04.185 "listen_address": { 00:19:04.185 "trtype": "TCP", 00:19:04.185 "adrfam": "IPv4", 00:19:04.185 "traddr": "10.0.0.2", 00:19:04.185 "trsvcid": "4420" 00:19:04.185 }, 00:19:04.185 "peer_address": { 00:19:04.185 "trtype": "TCP", 00:19:04.185 "adrfam": "IPv4", 00:19:04.185 "traddr": "10.0.0.1", 00:19:04.185 "trsvcid": "44810" 00:19:04.185 
}, 00:19:04.185 "auth": { 00:19:04.185 "state": "completed", 00:19:04.185 "digest": "sha256", 00:19:04.185 "dhgroup": "ffdhe2048" 00:19:04.185 } 00:19:04.185 } 00:19:04.185 ]' 00:19:04.185 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.185 07:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.185 07:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.185 07:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.185 07:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.185 07:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.185 07:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.185 07:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.443 07:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:19:04.444 07:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret 
DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.818 07:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.385 00:19:06.385 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.385 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.385 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.385 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.385 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.385 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.385 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.643 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.643 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.643 { 00:19:06.643 "cntlid": 11, 00:19:06.643 "qid": 0, 00:19:06.643 "state": "enabled", 00:19:06.643 "thread": "nvmf_tgt_poll_group_000", 00:19:06.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:06.643 "listen_address": { 00:19:06.643 "trtype": "TCP", 00:19:06.643 "adrfam": "IPv4", 00:19:06.643 "traddr": "10.0.0.2", 00:19:06.643 "trsvcid": "4420" 00:19:06.643 }, 00:19:06.644 "peer_address": { 00:19:06.644 "trtype": "TCP", 00:19:06.644 "adrfam": "IPv4", 00:19:06.644 "traddr": "10.0.0.1", 00:19:06.644 "trsvcid": "60820" 00:19:06.644 }, 00:19:06.644 "auth": { 00:19:06.644 "state": "completed", 00:19:06.644 "digest": "sha256", 00:19:06.644 "dhgroup": "ffdhe2048" 00:19:06.644 } 00:19:06.644 } 00:19:06.644 ]' 00:19:06.644 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.644 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.644 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.644 07:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.644 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.644 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.644 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.644 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.902 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:06.902 07:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:07.835 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.835 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:07.835 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:07.835 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.835 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.835 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.835 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:07.835 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.094 07:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.661 00:19:08.661 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.661 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.661 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.920 07:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.920 { 00:19:08.920 "cntlid": 13, 00:19:08.920 "qid": 0, 00:19:08.920 "state": "enabled", 00:19:08.920 "thread": "nvmf_tgt_poll_group_000", 00:19:08.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:08.920 "listen_address": { 00:19:08.920 "trtype": "TCP", 00:19:08.920 "adrfam": "IPv4", 00:19:08.920 "traddr": "10.0.0.2", 00:19:08.920 "trsvcid": "4420" 00:19:08.920 }, 00:19:08.920 "peer_address": { 00:19:08.920 "trtype": "TCP", 00:19:08.920 "adrfam": "IPv4", 00:19:08.920 "traddr": "10.0.0.1", 00:19:08.920 "trsvcid": "60834" 00:19:08.920 }, 00:19:08.920 "auth": { 00:19:08.920 "state": "completed", 00:19:08.920 "digest": "sha256", 00:19:08.920 "dhgroup": "ffdhe2048" 00:19:08.920 } 00:19:08.920 } 00:19:08.920 ]' 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.920 07:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.179 07:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:09.179 07:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:10.554 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.555 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:10.555 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.555 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.555 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.555 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:10.555 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.555 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.813 00:19:10.813 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.813 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.813 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.071 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.071 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.071 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.071 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.071 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.071 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.071 { 00:19:11.071 "cntlid": 15, 00:19:11.071 "qid": 0, 00:19:11.071 "state": "enabled", 00:19:11.071 "thread": "nvmf_tgt_poll_group_000", 00:19:11.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:11.071 "listen_address": { 00:19:11.071 "trtype": "TCP", 00:19:11.071 "adrfam": "IPv4", 00:19:11.071 "traddr": "10.0.0.2", 00:19:11.071 "trsvcid": "4420" 00:19:11.071 }, 00:19:11.071 "peer_address": { 00:19:11.071 "trtype": "TCP", 00:19:11.071 "adrfam": "IPv4", 00:19:11.071 "traddr": "10.0.0.1", 
00:19:11.071 "trsvcid": "60868" 00:19:11.071 }, 00:19:11.071 "auth": { 00:19:11.071 "state": "completed", 00:19:11.071 "digest": "sha256", 00:19:11.071 "dhgroup": "ffdhe2048" 00:19:11.071 } 00:19:11.071 } 00:19:11.071 ]' 00:19:11.071 07:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.330 07:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.330 07:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.330 07:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.330 07:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.330 07:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.330 07:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.330 07:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.588 07:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:19:11.588 07:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:19:12.524 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.524 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.524 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.524 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.524 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.524 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.524 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.524 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.524 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.783 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:12.783 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.783 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.783 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:12.783 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:12.783 07:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.783 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.783 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.783 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.041 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.041 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.041 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.041 07:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.300 00:19:13.300 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.300 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.300 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.559 { 00:19:13.559 "cntlid": 17, 00:19:13.559 "qid": 0, 00:19:13.559 "state": "enabled", 00:19:13.559 "thread": "nvmf_tgt_poll_group_000", 00:19:13.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:13.559 "listen_address": { 00:19:13.559 "trtype": "TCP", 00:19:13.559 "adrfam": "IPv4", 00:19:13.559 "traddr": "10.0.0.2", 00:19:13.559 "trsvcid": "4420" 00:19:13.559 }, 00:19:13.559 "peer_address": { 00:19:13.559 "trtype": "TCP", 00:19:13.559 "adrfam": "IPv4", 00:19:13.559 "traddr": "10.0.0.1", 00:19:13.559 "trsvcid": "60898" 00:19:13.559 }, 00:19:13.559 "auth": { 00:19:13.559 "state": "completed", 00:19:13.559 "digest": "sha256", 00:19:13.559 "dhgroup": "ffdhe3072" 00:19:13.559 } 00:19:13.559 } 00:19:13.559 ]' 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.559 07:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.559 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.817 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.817 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.817 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.076 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:19:14.076 07:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:19:15.009 07:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.009 07:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.009 07:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.009 07:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.009 07:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.009 07:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.009 07:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.009 07:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.267 07:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.267 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.268 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.268 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.527 00:19:15.527 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.527 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.527 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.812 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.812 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.812 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.812 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.812 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.812 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.812 { 00:19:15.812 "cntlid": 19, 00:19:15.812 "qid": 0, 00:19:15.812 "state": "enabled", 00:19:15.812 "thread": "nvmf_tgt_poll_group_000", 00:19:15.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:15.812 "listen_address": { 00:19:15.812 "trtype": "TCP", 00:19:15.812 "adrfam": "IPv4", 00:19:15.812 "traddr": "10.0.0.2", 00:19:15.812 "trsvcid": "4420" 00:19:15.812 }, 00:19:15.812 "peer_address": { 00:19:15.812 "trtype": "TCP", 00:19:15.812 "adrfam": "IPv4", 00:19:15.812 "traddr": "10.0.0.1", 00:19:15.812 "trsvcid": "44706" 00:19:15.812 }, 00:19:15.812 "auth": { 00:19:15.812 "state": "completed", 00:19:15.812 "digest": "sha256", 00:19:15.812 "dhgroup": "ffdhe3072" 00:19:15.812 } 00:19:15.812 } 00:19:15.812 ]' 00:19:15.812 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.101 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.101 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.101 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.101 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.101 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.101 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.101 07:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.360 07:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:16.360 07:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:17.294 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.294 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.294 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.294 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.294 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.294 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.294 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.294 07:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.551 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.117 00:19:18.117 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.117 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.117 07:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.117 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.117 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.117 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.117 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.375 { 00:19:18.375 "cntlid": 21, 00:19:18.375 "qid": 0, 00:19:18.375 "state": "enabled", 00:19:18.375 "thread": "nvmf_tgt_poll_group_000", 00:19:18.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:18.375 "listen_address": { 00:19:18.375 "trtype": "TCP", 00:19:18.375 "adrfam": "IPv4", 00:19:18.375 "traddr": "10.0.0.2", 00:19:18.375 
"trsvcid": "4420" 00:19:18.375 }, 00:19:18.375 "peer_address": { 00:19:18.375 "trtype": "TCP", 00:19:18.375 "adrfam": "IPv4", 00:19:18.375 "traddr": "10.0.0.1", 00:19:18.375 "trsvcid": "44724" 00:19:18.375 }, 00:19:18.375 "auth": { 00:19:18.375 "state": "completed", 00:19:18.375 "digest": "sha256", 00:19:18.375 "dhgroup": "ffdhe3072" 00:19:18.375 } 00:19:18.375 } 00:19:18.375 ]' 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.375 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.634 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:18.634 07:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:19.568 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.568 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.568 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.568 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.568 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.568 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.568 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.568 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.826 07:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.393 00:19:20.393 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.393 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:20.393 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.651 { 00:19:20.651 "cntlid": 23, 00:19:20.651 "qid": 0, 00:19:20.651 "state": "enabled", 00:19:20.651 "thread": "nvmf_tgt_poll_group_000", 00:19:20.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:20.651 "listen_address": { 00:19:20.651 "trtype": "TCP", 00:19:20.651 "adrfam": "IPv4", 00:19:20.651 "traddr": "10.0.0.2", 00:19:20.651 "trsvcid": "4420" 00:19:20.651 }, 00:19:20.651 "peer_address": { 00:19:20.651 "trtype": "TCP", 00:19:20.651 "adrfam": "IPv4", 00:19:20.651 "traddr": "10.0.0.1", 00:19:20.651 "trsvcid": "44760" 00:19:20.651 }, 00:19:20.651 "auth": { 00:19:20.651 "state": "completed", 00:19:20.651 "digest": "sha256", 00:19:20.651 "dhgroup": "ffdhe3072" 00:19:20.651 } 00:19:20.651 } 00:19:20.651 ]' 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.651 07:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.651 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.910 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:19:20.910 07:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:19:21.844 07:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.844 07:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.844 07:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.844 07:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:21.844 07:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.844 07:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.844 07:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.844 07:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:21.844 07:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.412 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.670 00:19:22.670 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.670 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.670 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.928 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.928 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.928 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.928 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.928 07:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.928 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.928 { 00:19:22.928 "cntlid": 25, 00:19:22.928 "qid": 0, 00:19:22.928 "state": "enabled", 00:19:22.928 "thread": "nvmf_tgt_poll_group_000", 00:19:22.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:22.928 "listen_address": { 00:19:22.928 "trtype": "TCP", 00:19:22.928 "adrfam": "IPv4", 00:19:22.928 "traddr": "10.0.0.2", 00:19:22.928 "trsvcid": "4420" 00:19:22.928 }, 00:19:22.928 "peer_address": { 00:19:22.928 "trtype": "TCP", 00:19:22.928 "adrfam": "IPv4", 00:19:22.928 "traddr": "10.0.0.1", 00:19:22.928 "trsvcid": "44774" 00:19:22.928 }, 00:19:22.928 "auth": { 00:19:22.928 "state": "completed", 00:19:22.928 "digest": "sha256", 00:19:22.928 "dhgroup": "ffdhe4096" 00:19:22.928 } 00:19:22.928 } 00:19:22.928 ]' 00:19:22.928 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.928 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.928 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.187 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.187 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.187 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.187 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.187 07:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.445 07:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:19:23.445 07:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:19:24.380 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.380 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.380 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.380 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.380 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.380 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.380 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.380 07:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.638 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.205 00:19:25.205 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.205 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.206 07:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.464 { 00:19:25.464 "cntlid": 27, 00:19:25.464 "qid": 0, 00:19:25.464 "state": "enabled", 00:19:25.464 "thread": "nvmf_tgt_poll_group_000", 00:19:25.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:25.464 "listen_address": { 00:19:25.464 "trtype": "TCP", 00:19:25.464 "adrfam": "IPv4", 00:19:25.464 "traddr": "10.0.0.2", 00:19:25.464 
"trsvcid": "4420" 00:19:25.464 }, 00:19:25.464 "peer_address": { 00:19:25.464 "trtype": "TCP", 00:19:25.464 "adrfam": "IPv4", 00:19:25.464 "traddr": "10.0.0.1", 00:19:25.464 "trsvcid": "44790" 00:19:25.464 }, 00:19:25.464 "auth": { 00:19:25.464 "state": "completed", 00:19:25.464 "digest": "sha256", 00:19:25.464 "dhgroup": "ffdhe4096" 00:19:25.464 } 00:19:25.464 } 00:19:25.464 ]' 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.464 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.723 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:25.723 07:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:26.658 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.658 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.658 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.658 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.917 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.917 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.917 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.917 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.176 07:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.434 00:19:27.434 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.434 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:27.434 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.692 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.692 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.692 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.692 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.692 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.692 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.692 { 00:19:27.692 "cntlid": 29, 00:19:27.692 "qid": 0, 00:19:27.692 "state": "enabled", 00:19:27.692 "thread": "nvmf_tgt_poll_group_000", 00:19:27.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:27.692 "listen_address": { 00:19:27.692 "trtype": "TCP", 00:19:27.692 "adrfam": "IPv4", 00:19:27.692 "traddr": "10.0.0.2", 00:19:27.692 "trsvcid": "4420" 00:19:27.692 }, 00:19:27.692 "peer_address": { 00:19:27.692 "trtype": "TCP", 00:19:27.692 "adrfam": "IPv4", 00:19:27.692 "traddr": "10.0.0.1", 00:19:27.692 "trsvcid": "49860" 00:19:27.692 }, 00:19:27.692 "auth": { 00:19:27.692 "state": "completed", 00:19:27.692 "digest": "sha256", 00:19:27.692 "dhgroup": "ffdhe4096" 00:19:27.692 } 00:19:27.692 } 00:19:27.692 ]' 00:19:27.692 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.692 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.692 07:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.950 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.950 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.950 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.950 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.950 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.208 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:28.208 07:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:29.143 07:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.143 07:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.143 07:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.143 07:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.143 07:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.143 07:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.143 07:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.143 07:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:29.401 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:29.966
00:19:29.966 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:29.966 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:29.966 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:30.224 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:30.224 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:30.224 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.224 07:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:30.224 {
00:19:30.224 "cntlid": 31,
00:19:30.224 "qid": 0,
00:19:30.224 "state": "enabled",
00:19:30.224 "thread": "nvmf_tgt_poll_group_000",
00:19:30.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:30.224 "listen_address": {
00:19:30.224 "trtype": "TCP",
00:19:30.224 "adrfam": "IPv4",
00:19:30.224 "traddr": "10.0.0.2",
00:19:30.224 "trsvcid": "4420"
00:19:30.224 },
00:19:30.224 "peer_address": {
00:19:30.224 "trtype": "TCP",
00:19:30.224 "adrfam": "IPv4",
00:19:30.224 "traddr": "10.0.0.1",
00:19:30.224 "trsvcid": "49900"
00:19:30.224 },
00:19:30.224 "auth": {
00:19:30.224 "state": "completed",
00:19:30.224 "digest": "sha256",
00:19:30.224 "dhgroup": "ffdhe4096"
00:19:30.224 }
00:19:30.224 }
00:19:30.224 ]'
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:30.224 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:30.482 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=:
00:19:30.482 07:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=:
00:19:31.416 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:31.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:31.416 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:31.416 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.416 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.416 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.416 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:31.416 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:31.416 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:31.416 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:31.982 07:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:32.547
00:19:32.547 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:32.547 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:32.547 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:32.547 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:32.547 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:32.547 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.547 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.547 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.547 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:32.547 {
00:19:32.547 "cntlid": 33,
00:19:32.547 "qid": 0,
00:19:32.547 "state": "enabled",
00:19:32.547 "thread": "nvmf_tgt_poll_group_000",
00:19:32.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:32.547 "listen_address": {
00:19:32.547 "trtype": "TCP",
00:19:32.547 "adrfam": "IPv4",
00:19:32.547 "traddr": "10.0.0.2",
00:19:32.547 "trsvcid": "4420"
00:19:32.547 },
00:19:32.547 "peer_address": {
00:19:32.548 "trtype": "TCP",
00:19:32.548 "adrfam": "IPv4",
00:19:32.548 "traddr": "10.0.0.1",
00:19:32.548 "trsvcid": "49930"
00:19:32.548 },
00:19:32.548 "auth": {
00:19:32.548 "state": "completed",
00:19:32.548 "digest": "sha256",
00:19:32.548 "dhgroup": "ffdhe6144"
00:19:32.548 }
00:19:32.548 }
00:19:32.548 ]'
00:19:32.806 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:32.806 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:32.806 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:32.806 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:32.806 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:32.806 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:32.806 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:32.806 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:33.064 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=:
00:19:33.064 07:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=:
00:19:33.998 07:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:33.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:33.998 07:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:33.998 07:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:33.998 07:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.998 07:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:33.998 07:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:33.998 07:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:33.998 07:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:34.256 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:34.822
00:19:34.822 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:34.822 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:34.822 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:35.080 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:35.080 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:35.080 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.080 07:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.080 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:35.080 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:35.080 {
00:19:35.080 "cntlid": 35,
00:19:35.080 "qid": 0,
00:19:35.080 "state": "enabled",
00:19:35.080 "thread": "nvmf_tgt_poll_group_000",
00:19:35.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:35.080 "listen_address": {
00:19:35.080 "trtype": "TCP",
00:19:35.080 "adrfam": "IPv4",
00:19:35.080 "traddr": "10.0.0.2",
00:19:35.080 "trsvcid": "4420"
00:19:35.080 },
00:19:35.080 "peer_address": {
00:19:35.080 "trtype": "TCP",
00:19:35.080 "adrfam": "IPv4",
00:19:35.080 "traddr": "10.0.0.1",
00:19:35.080 "trsvcid": "49968"
00:19:35.080 },
00:19:35.080 "auth": {
00:19:35.080 "state": "completed",
00:19:35.080 "digest": "sha256",
00:19:35.080 "dhgroup": "ffdhe6144"
00:19:35.080 }
00:19:35.080 }
00:19:35.080 ]'
00:19:35.080 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:35.339 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:35.339 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:35.339 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:35.339 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:35.339 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:35.339 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:35.339 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:35.597 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==:
00:19:35.597 07:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==:
00:19:36.532 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:36.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:36.532 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:36.532 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:36.532 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.532 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:36.532 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:36.532 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:36.532 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:37.097 07:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:37.663
00:19:37.663 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:37.663 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:37.663 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:37.663 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:37.663 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:37.663 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.663 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:37.921 {
00:19:37.921 "cntlid": 37,
00:19:37.921 "qid": 0,
00:19:37.921 "state": "enabled",
00:19:37.921 "thread": "nvmf_tgt_poll_group_000",
00:19:37.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:37.921 "listen_address": {
00:19:37.921 "trtype": "TCP",
00:19:37.921 "adrfam": "IPv4",
00:19:37.921 "traddr": "10.0.0.2",
00:19:37.921 "trsvcid": "4420"
00:19:37.921 },
00:19:37.921 "peer_address": {
00:19:37.921 "trtype": "TCP",
00:19:37.921 "adrfam": "IPv4",
00:19:37.921 "traddr": "10.0.0.1",
00:19:37.921 "trsvcid": "33710"
00:19:37.921 },
00:19:37.921 "auth": {
00:19:37.921 "state": "completed",
00:19:37.921 "digest": "sha256",
00:19:37.921 "dhgroup": "ffdhe6144"
00:19:37.921 }
00:19:37.921 }
00:19:37.921 ]'
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:37.921 07:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:38.180 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM:
00:19:38.180 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM:
00:19:39.116 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:39.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:39.116 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:39.116 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.116 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.116 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.116 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:39.116 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:39.116 07:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:39.375 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:39.942
00:19:40.200 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:40.201 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:40.201 07:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:40.459 {
00:19:40.459 "cntlid": 39,
00:19:40.459 "qid": 0,
00:19:40.459 "state": "enabled",
00:19:40.459 "thread": "nvmf_tgt_poll_group_000",
00:19:40.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:40.459 "listen_address": {
00:19:40.459 "trtype": "TCP",
00:19:40.459 "adrfam": "IPv4",
00:19:40.459 "traddr": "10.0.0.2",
00:19:40.459 "trsvcid": "4420"
00:19:40.459 },
00:19:40.459 "peer_address": {
00:19:40.459 "trtype": "TCP",
00:19:40.459 "adrfam": "IPv4",
00:19:40.459 "traddr": "10.0.0.1",
00:19:40.459 "trsvcid": "33736"
00:19:40.459 },
00:19:40.459 "auth": {
00:19:40.459 "state": "completed",
00:19:40.459 "digest": "sha256",
00:19:40.459 "dhgroup": "ffdhe6144"
00:19:40.459 }
00:19:40.459 }
00:19:40.459 ]'
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:40.459 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:40.718 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=:
00:19:40.718 07:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=:
00:19:41.652 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:41.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:41.652 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:41.652 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.652 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.652 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.652 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:41.652 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:41.652 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:41.652 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:41.911 07:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:42.848
00:19:42.848 07:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:42.848 07:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:42.848 07:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:43.415 {
00:19:43.415 "cntlid": 41,
00:19:43.415 "qid": 0,
00:19:43.415 "state": "enabled",
00:19:43.415 "thread": "nvmf_tgt_poll_group_000",
00:19:43.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:19:43.415 "listen_address": {
00:19:43.415 "trtype": "TCP",
00:19:43.415 "adrfam": "IPv4",
00:19:43.415 "traddr": "10.0.0.2",
00:19:43.415 "trsvcid": "4420"
00:19:43.415 },
00:19:43.415 "peer_address": {
00:19:43.415 "trtype": "TCP",
00:19:43.415 "adrfam": "IPv4",
00:19:43.415 "traddr": "10.0.0.1",
00:19:43.415 "trsvcid": "33768"
00:19:43.415 },
00:19:43.415 "auth": {
00:19:43.415 "state": "completed",
00:19:43.415 "digest": "sha256",
00:19:43.415 "dhgroup": "ffdhe8192"
00:19:43.415 }
00:19:43.415 }
00:19:43.415 ]'
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:43.415 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:43.416 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:43.416 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:43.674 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=:
00:19:43.674 07:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=:
00:19:44.613 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:44.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:44.613 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.613 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.613 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.613 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.613 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.613 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.613 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.872 07:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.841 00:19:45.841 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.841 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.841 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.123 07:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.123 { 00:19:46.123 "cntlid": 43, 00:19:46.123 "qid": 0, 00:19:46.123 "state": "enabled", 00:19:46.123 "thread": "nvmf_tgt_poll_group_000", 00:19:46.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:46.123 "listen_address": { 00:19:46.123 "trtype": "TCP", 00:19:46.123 "adrfam": "IPv4", 00:19:46.123 "traddr": "10.0.0.2", 00:19:46.123 "trsvcid": "4420" 00:19:46.123 }, 00:19:46.123 "peer_address": { 00:19:46.123 "trtype": "TCP", 00:19:46.123 "adrfam": "IPv4", 00:19:46.123 "traddr": "10.0.0.1", 00:19:46.123 "trsvcid": "33360" 00:19:46.123 }, 00:19:46.123 "auth": { 00:19:46.123 "state": "completed", 00:19:46.123 "digest": "sha256", 00:19:46.123 "dhgroup": "ffdhe8192" 00:19:46.123 } 00:19:46.123 } 00:19:46.123 ]' 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.123 07:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.381 07:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:46.381 07:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:47.325 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.325 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.326 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.326 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.326 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.326 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.326 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.326 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.590 07:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.527 00:19:48.527 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.527 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.527 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.785 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.785 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.785 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.785 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.785 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.785 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.785 { 00:19:48.785 "cntlid": 45, 00:19:48.785 "qid": 0, 00:19:48.785 "state": "enabled", 00:19:48.785 "thread": "nvmf_tgt_poll_group_000", 00:19:48.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:48.785 
"listen_address": { 00:19:48.785 "trtype": "TCP", 00:19:48.785 "adrfam": "IPv4", 00:19:48.785 "traddr": "10.0.0.2", 00:19:48.785 "trsvcid": "4420" 00:19:48.785 }, 00:19:48.785 "peer_address": { 00:19:48.785 "trtype": "TCP", 00:19:48.785 "adrfam": "IPv4", 00:19:48.785 "traddr": "10.0.0.1", 00:19:48.785 "trsvcid": "33382" 00:19:48.785 }, 00:19:48.785 "auth": { 00:19:48.785 "state": "completed", 00:19:48.785 "digest": "sha256", 00:19:48.785 "dhgroup": "ffdhe8192" 00:19:48.785 } 00:19:48.785 } 00:19:48.785 ]' 00:19:48.785 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.785 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.785 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.042 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.042 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.042 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.042 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.042 07:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.299 07:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:49.299 07:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:50.234 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.234 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.234 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.234 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.234 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.234 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.234 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.234 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.492 07:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.427 00:19:51.428 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.428 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:19:51.428 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.686 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.686 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.686 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.686 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.686 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.686 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.686 { 00:19:51.686 "cntlid": 47, 00:19:51.686 "qid": 0, 00:19:51.686 "state": "enabled", 00:19:51.686 "thread": "nvmf_tgt_poll_group_000", 00:19:51.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.686 "listen_address": { 00:19:51.686 "trtype": "TCP", 00:19:51.686 "adrfam": "IPv4", 00:19:51.686 "traddr": "10.0.0.2", 00:19:51.686 "trsvcid": "4420" 00:19:51.686 }, 00:19:51.686 "peer_address": { 00:19:51.686 "trtype": "TCP", 00:19:51.686 "adrfam": "IPv4", 00:19:51.686 "traddr": "10.0.0.1", 00:19:51.686 "trsvcid": "33408" 00:19:51.686 }, 00:19:51.686 "auth": { 00:19:51.686 "state": "completed", 00:19:51.686 "digest": "sha256", 00:19:51.686 "dhgroup": "ffdhe8192" 00:19:51.686 } 00:19:51.686 } 00:19:51.686 ]' 00:19:51.686 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.944 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.944 07:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.944 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.944 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.944 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.944 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.944 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.203 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:19:52.203 07:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.140 07:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.399 
07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.399 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.964 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.964 { 00:19:53.964 "cntlid": 49, 00:19:53.964 "qid": 0, 00:19:53.964 "state": "enabled", 00:19:53.964 "thread": "nvmf_tgt_poll_group_000", 00:19:53.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:53.964 "listen_address": { 00:19:53.964 "trtype": "TCP", 00:19:53.964 "adrfam": "IPv4", 00:19:53.964 "traddr": "10.0.0.2", 00:19:53.964 "trsvcid": "4420" 00:19:53.964 }, 00:19:53.964 "peer_address": { 00:19:53.964 "trtype": "TCP", 00:19:53.964 "adrfam": "IPv4", 00:19:53.964 "traddr": "10.0.0.1", 00:19:53.964 "trsvcid": "33428" 00:19:53.964 }, 00:19:53.964 "auth": { 00:19:53.964 "state": "completed", 00:19:53.964 "digest": "sha384", 00:19:53.964 "dhgroup": "null" 00:19:53.964 } 00:19:53.964 } 00:19:53.964 ]' 00:19:53.964 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.222 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.222 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.222 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:54.222 07:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.222 07:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.222 07:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:54.222 07:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.480 07:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:19:54.480 07:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:19:55.416 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.416 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.416 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.416 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.416 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.416 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.416 07:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.416 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.674 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.933 00:19:56.194 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.194 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.194 07:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.452 { 00:19:56.452 "cntlid": 51, 00:19:56.452 "qid": 0, 00:19:56.452 "state": "enabled", 00:19:56.452 "thread": "nvmf_tgt_poll_group_000", 00:19:56.452 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.452 "listen_address": { 00:19:56.452 "trtype": "TCP", 00:19:56.452 "adrfam": "IPv4", 00:19:56.452 "traddr": "10.0.0.2", 00:19:56.452 "trsvcid": "4420" 00:19:56.452 }, 00:19:56.452 "peer_address": { 00:19:56.452 "trtype": "TCP", 00:19:56.452 "adrfam": "IPv4", 00:19:56.452 "traddr": "10.0.0.1", 00:19:56.452 "trsvcid": "40002" 00:19:56.452 }, 00:19:56.452 "auth": { 00:19:56.452 "state": "completed", 00:19:56.452 "digest": "sha384", 00:19:56.452 "dhgroup": "null" 00:19:56.452 } 00:19:56.452 } 00:19:56.452 ]' 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.452 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.711 07:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:56.711 07:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:19:57.646 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.646 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.646 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.646 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.646 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.646 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.646 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.646 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.904 07:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.472 00:19:58.472 07:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.472 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.472 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.730 { 00:19:58.730 "cntlid": 53, 00:19:58.730 "qid": 0, 00:19:58.730 "state": "enabled", 00:19:58.730 "thread": "nvmf_tgt_poll_group_000", 00:19:58.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.730 "listen_address": { 00:19:58.730 "trtype": "TCP", 00:19:58.730 "adrfam": "IPv4", 00:19:58.730 "traddr": "10.0.0.2", 00:19:58.730 "trsvcid": "4420" 00:19:58.730 }, 00:19:58.730 "peer_address": { 00:19:58.730 "trtype": "TCP", 00:19:58.730 "adrfam": "IPv4", 00:19:58.730 "traddr": "10.0.0.1", 00:19:58.730 "trsvcid": "40034" 00:19:58.730 }, 00:19:58.730 "auth": { 00:19:58.730 "state": "completed", 00:19:58.730 "digest": "sha384", 00:19:58.730 "dhgroup": "null" 00:19:58.730 } 00:19:58.730 } 00:19:58.730 ]' 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.730 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.988 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:58.988 07:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:19:59.925 07:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.925 07:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.925 07:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.925 07:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.925 07:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.925 07:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.925 07:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:59.926 07:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.183 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:00.183 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.183 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:00.183 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:00.183 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:00.183 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.183 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:00.183 
07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.184 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.184 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.184 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:00.184 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.184 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.750 00:20:00.750 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.750 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.750 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.008 07:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.008 { 00:20:01.008 "cntlid": 55, 00:20:01.008 "qid": 0, 00:20:01.008 "state": "enabled", 00:20:01.008 "thread": "nvmf_tgt_poll_group_000", 00:20:01.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:01.008 "listen_address": { 00:20:01.008 "trtype": "TCP", 00:20:01.008 "adrfam": "IPv4", 00:20:01.008 "traddr": "10.0.0.2", 00:20:01.008 "trsvcid": "4420" 00:20:01.008 }, 00:20:01.008 "peer_address": { 00:20:01.008 "trtype": "TCP", 00:20:01.008 "adrfam": "IPv4", 00:20:01.008 "traddr": "10.0.0.1", 00:20:01.008 "trsvcid": "40064" 00:20:01.008 }, 00:20:01.008 "auth": { 00:20:01.008 "state": "completed", 00:20:01.008 "digest": "sha384", 00:20:01.008 "dhgroup": "null" 00:20:01.008 } 00:20:01.008 } 00:20:01.008 ]' 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.008 07:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.267 07:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:01.267 07:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.642 07:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.642 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.643 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.643 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.643 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.643 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.643 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.643 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.901 00:20:02.901 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.901 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.901 07:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.158 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.158 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.158 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.158 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.158 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.158 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.158 { 00:20:03.158 "cntlid": 57, 00:20:03.158 "qid": 0, 00:20:03.158 "state": "enabled", 00:20:03.158 "thread": "nvmf_tgt_poll_group_000", 00:20:03.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.158 "listen_address": { 00:20:03.158 "trtype": "TCP", 00:20:03.158 "adrfam": "IPv4", 00:20:03.158 "traddr": "10.0.0.2", 00:20:03.158 
"trsvcid": "4420" 00:20:03.158 }, 00:20:03.158 "peer_address": { 00:20:03.158 "trtype": "TCP", 00:20:03.158 "adrfam": "IPv4", 00:20:03.158 "traddr": "10.0.0.1", 00:20:03.158 "trsvcid": "40090" 00:20:03.158 }, 00:20:03.158 "auth": { 00:20:03.158 "state": "completed", 00:20:03.158 "digest": "sha384", 00:20:03.158 "dhgroup": "ffdhe2048" 00:20:03.158 } 00:20:03.158 } 00:20:03.158 ]' 00:20:03.158 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.417 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.417 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.417 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.417 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.417 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.417 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.417 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.675 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:03.675 07:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:04.613 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.613 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.613 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.613 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.613 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.613 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.613 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.613 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.872 07:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.872 07:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.442 00:20:05.442 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.442 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.442 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.442 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.442 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.442 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.442 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.701 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.701 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.701 { 00:20:05.701 "cntlid": 59, 00:20:05.701 "qid": 0, 00:20:05.701 "state": "enabled", 00:20:05.701 "thread": "nvmf_tgt_poll_group_000", 00:20:05.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.701 "listen_address": { 00:20:05.701 "trtype": "TCP", 00:20:05.701 "adrfam": "IPv4", 00:20:05.701 "traddr": "10.0.0.2", 00:20:05.701 "trsvcid": "4420" 00:20:05.701 }, 00:20:05.701 "peer_address": { 00:20:05.701 "trtype": "TCP", 00:20:05.701 "adrfam": "IPv4", 00:20:05.701 "traddr": "10.0.0.1", 00:20:05.701 "trsvcid": "38376" 00:20:05.701 }, 00:20:05.701 "auth": { 00:20:05.701 "state": "completed", 00:20:05.701 "digest": "sha384", 00:20:05.701 "dhgroup": "ffdhe2048" 00:20:05.701 } 00:20:05.701 } 00:20:05.701 ]' 00:20:05.701 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.701 07:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.701 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.701 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.701 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.701 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.701 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.701 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.960 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:05.960 07:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:06.897 07:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.155 07:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.155 07:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.155 07:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.155 07:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.155 07:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.155 07:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.155 07:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.415 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.674 00:20:07.674 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.674 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.674 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.933 07:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.933 { 00:20:07.933 "cntlid": 61, 00:20:07.933 "qid": 0, 00:20:07.933 "state": "enabled", 00:20:07.933 "thread": "nvmf_tgt_poll_group_000", 00:20:07.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.933 "listen_address": { 00:20:07.933 "trtype": "TCP", 00:20:07.933 "adrfam": "IPv4", 00:20:07.933 "traddr": "10.0.0.2", 00:20:07.933 "trsvcid": "4420" 00:20:07.933 }, 00:20:07.933 "peer_address": { 00:20:07.933 "trtype": "TCP", 00:20:07.933 "adrfam": "IPv4", 00:20:07.933 "traddr": "10.0.0.1", 00:20:07.933 "trsvcid": "38404" 00:20:07.933 }, 00:20:07.933 "auth": { 00:20:07.933 "state": "completed", 00:20:07.933 "digest": "sha384", 00:20:07.933 "dhgroup": "ffdhe2048" 00:20:07.933 } 00:20:07.933 } 00:20:07.933 ]' 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.933 07:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.501 07:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:08.501 07:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:09.435 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.435 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.435 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.435 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.435 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.435 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.435 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.435 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.692 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.949 00:20:09.949 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.949 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.949 07:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.208 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.208 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.208 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.208 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.208 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.208 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.208 { 00:20:10.208 "cntlid": 63, 00:20:10.208 "qid": 0, 00:20:10.208 "state": "enabled", 00:20:10.208 "thread": "nvmf_tgt_poll_group_000", 00:20:10.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.208 "listen_address": { 00:20:10.208 "trtype": "TCP", 00:20:10.208 "adrfam": 
"IPv4", 00:20:10.208 "traddr": "10.0.0.2", 00:20:10.208 "trsvcid": "4420" 00:20:10.208 }, 00:20:10.208 "peer_address": { 00:20:10.208 "trtype": "TCP", 00:20:10.208 "adrfam": "IPv4", 00:20:10.208 "traddr": "10.0.0.1", 00:20:10.208 "trsvcid": "38430" 00:20:10.208 }, 00:20:10.208 "auth": { 00:20:10.208 "state": "completed", 00:20:10.208 "digest": "sha384", 00:20:10.208 "dhgroup": "ffdhe2048" 00:20:10.208 } 00:20:10.208 } 00:20:10.208 ]' 00:20:10.208 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.208 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.208 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.466 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.466 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.466 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.466 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.466 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.725 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:10.725 07:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:11.662 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.662 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.662 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.662 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.662 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.662 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.662 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.662 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.662 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.921 
07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.921 07:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.179 00:20:12.438 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.438 07:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.438 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.696 { 00:20:12.696 "cntlid": 65, 00:20:12.696 "qid": 0, 00:20:12.696 "state": "enabled", 00:20:12.696 "thread": "nvmf_tgt_poll_group_000", 00:20:12.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.696 "listen_address": { 00:20:12.696 "trtype": "TCP", 00:20:12.696 "adrfam": "IPv4", 00:20:12.696 "traddr": "10.0.0.2", 00:20:12.696 "trsvcid": "4420" 00:20:12.696 }, 00:20:12.696 "peer_address": { 00:20:12.696 "trtype": "TCP", 00:20:12.696 "adrfam": "IPv4", 00:20:12.696 "traddr": "10.0.0.1", 00:20:12.696 "trsvcid": "38440" 00:20:12.696 }, 00:20:12.696 "auth": { 00:20:12.696 "state": "completed", 00:20:12.696 "digest": "sha384", 00:20:12.696 "dhgroup": "ffdhe3072" 00:20:12.696 } 00:20:12.696 } 00:20:12.696 ]' 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.696 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.956 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:12.956 07:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:13.892 07:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.892 07:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.892 07:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.892 07:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.892 07:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.892 07:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.892 07:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.892 07:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.151 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.719 00:20:14.719 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.719 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.719 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.978 07:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.978 { 00:20:14.978 "cntlid": 67, 00:20:14.978 "qid": 0, 00:20:14.978 "state": "enabled", 00:20:14.978 "thread": "nvmf_tgt_poll_group_000", 00:20:14.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.978 "listen_address": { 00:20:14.978 "trtype": "TCP", 00:20:14.978 "adrfam": "IPv4", 00:20:14.978 "traddr": "10.0.0.2", 00:20:14.978 "trsvcid": "4420" 00:20:14.978 }, 00:20:14.978 "peer_address": { 00:20:14.978 "trtype": "TCP", 00:20:14.978 "adrfam": "IPv4", 00:20:14.978 "traddr": "10.0.0.1", 00:20:14.978 "trsvcid": "38468" 00:20:14.978 }, 00:20:14.978 "auth": { 00:20:14.978 "state": "completed", 00:20:14.978 "digest": "sha384", 00:20:14.978 "dhgroup": "ffdhe3072" 00:20:14.978 } 00:20:14.978 } 00:20:14.978 ]' 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.978 07:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.236 07:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:15.236 07:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:16.217 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.217 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.217 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.217 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.217 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.217 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.217 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.217 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.475 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.042 00:20:17.042 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.042 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.042 07:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.300 { 00:20:17.300 "cntlid": 69, 00:20:17.300 "qid": 0, 00:20:17.300 "state": "enabled", 00:20:17.300 "thread": "nvmf_tgt_poll_group_000", 00:20:17.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.300 
"listen_address": { 00:20:17.300 "trtype": "TCP", 00:20:17.300 "adrfam": "IPv4", 00:20:17.300 "traddr": "10.0.0.2", 00:20:17.300 "trsvcid": "4420" 00:20:17.300 }, 00:20:17.300 "peer_address": { 00:20:17.300 "trtype": "TCP", 00:20:17.300 "adrfam": "IPv4", 00:20:17.300 "traddr": "10.0.0.1", 00:20:17.300 "trsvcid": "54946" 00:20:17.300 }, 00:20:17.300 "auth": { 00:20:17.300 "state": "completed", 00:20:17.300 "digest": "sha384", 00:20:17.300 "dhgroup": "ffdhe3072" 00:20:17.300 } 00:20:17.300 } 00:20:17.300 ]' 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.300 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.558 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:17.558 07:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.937 07:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.506 00:20:19.506 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.506 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:19.506 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.764 { 00:20:19.764 "cntlid": 71, 00:20:19.764 "qid": 0, 00:20:19.764 "state": "enabled", 00:20:19.764 "thread": "nvmf_tgt_poll_group_000", 00:20:19.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.764 "listen_address": { 00:20:19.764 "trtype": "TCP", 00:20:19.764 "adrfam": "IPv4", 00:20:19.764 "traddr": "10.0.0.2", 00:20:19.764 "trsvcid": "4420" 00:20:19.764 }, 00:20:19.764 "peer_address": { 00:20:19.764 "trtype": "TCP", 00:20:19.764 "adrfam": "IPv4", 00:20:19.764 "traddr": "10.0.0.1", 00:20:19.764 "trsvcid": "54968" 00:20:19.764 }, 00:20:19.764 "auth": { 00:20:19.764 "state": "completed", 00:20:19.764 "digest": "sha384", 00:20:19.764 "dhgroup": "ffdhe3072" 00:20:19.764 } 00:20:19.764 } 00:20:19.764 ]' 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.764 07:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.764 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.022 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:20.022 07:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:20.957 07:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.957 07:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.957 07:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:20.957 07:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.957 07:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.957 07:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.957 07:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.957 07:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.957 07:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.215 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.784 00:20:21.784 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.784 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.784 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.042 07:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.042 { 00:20:22.042 "cntlid": 73, 00:20:22.042 "qid": 0, 00:20:22.042 "state": "enabled", 00:20:22.042 "thread": "nvmf_tgt_poll_group_000", 00:20:22.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.042 "listen_address": { 00:20:22.042 "trtype": "TCP", 00:20:22.042 "adrfam": "IPv4", 00:20:22.042 "traddr": "10.0.0.2", 00:20:22.042 "trsvcid": "4420" 00:20:22.042 }, 00:20:22.042 "peer_address": { 00:20:22.042 "trtype": "TCP", 00:20:22.042 "adrfam": "IPv4", 00:20:22.042 "traddr": "10.0.0.1", 00:20:22.042 "trsvcid": "54982" 00:20:22.042 }, 00:20:22.042 "auth": { 00:20:22.042 "state": "completed", 00:20:22.042 "digest": "sha384", 00:20:22.042 "dhgroup": "ffdhe4096" 00:20:22.042 } 00:20:22.042 } 00:20:22.042 ]' 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.042 07:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.042 07:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.301 07:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:22.301 07:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:23.240 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.498 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.498 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.498 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.498 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.498 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.498 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.498 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.756 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.014 00:20:24.014 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.014 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.014 07:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.273 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.273 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.273 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.273 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.273 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.273 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.273 { 00:20:24.273 "cntlid": 75, 00:20:24.273 "qid": 0, 00:20:24.273 "state": "enabled", 00:20:24.273 "thread": "nvmf_tgt_poll_group_000", 00:20:24.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.273 
"listen_address": { 00:20:24.273 "trtype": "TCP", 00:20:24.273 "adrfam": "IPv4", 00:20:24.273 "traddr": "10.0.0.2", 00:20:24.273 "trsvcid": "4420" 00:20:24.273 }, 00:20:24.273 "peer_address": { 00:20:24.273 "trtype": "TCP", 00:20:24.273 "adrfam": "IPv4", 00:20:24.273 "traddr": "10.0.0.1", 00:20:24.273 "trsvcid": "55000" 00:20:24.273 }, 00:20:24.273 "auth": { 00:20:24.273 "state": "completed", 00:20:24.273 "digest": "sha384", 00:20:24.273 "dhgroup": "ffdhe4096" 00:20:24.273 } 00:20:24.273 } 00:20:24.273 ]' 00:20:24.273 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.530 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.530 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.530 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.530 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.530 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.530 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.530 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.788 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:24.788 07:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:25.722 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.722 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.722 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.722 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.722 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.722 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.722 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.722 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.980 07:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.547 00:20:26.547 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:26.547 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.547 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.806 { 00:20:26.806 "cntlid": 77, 00:20:26.806 "qid": 0, 00:20:26.806 "state": "enabled", 00:20:26.806 "thread": "nvmf_tgt_poll_group_000", 00:20:26.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.806 "listen_address": { 00:20:26.806 "trtype": "TCP", 00:20:26.806 "adrfam": "IPv4", 00:20:26.806 "traddr": "10.0.0.2", 00:20:26.806 "trsvcid": "4420" 00:20:26.806 }, 00:20:26.806 "peer_address": { 00:20:26.806 "trtype": "TCP", 00:20:26.806 "adrfam": "IPv4", 00:20:26.806 "traddr": "10.0.0.1", 00:20:26.806 "trsvcid": "45658" 00:20:26.806 }, 00:20:26.806 "auth": { 00:20:26.806 "state": "completed", 00:20:26.806 "digest": "sha384", 00:20:26.806 "dhgroup": "ffdhe4096" 00:20:26.806 } 00:20:26.806 } 00:20:26.806 ]' 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.806 07:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:26.806 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.064 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.064 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.064 07:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.322 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:27.323 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:28.255 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.255 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.255 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.255 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.255 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.255 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.255 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.255 07:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:28.513 07:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.513 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.771 00:20:28.771 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.771 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.771 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.029 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.029 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.029 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.029 07:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.287 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.287 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.287 { 00:20:29.287 "cntlid": 79, 00:20:29.287 "qid": 0, 00:20:29.287 "state": "enabled", 00:20:29.287 "thread": "nvmf_tgt_poll_group_000", 00:20:29.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.287 "listen_address": { 00:20:29.287 "trtype": "TCP", 00:20:29.287 "adrfam": "IPv4", 00:20:29.287 "traddr": "10.0.0.2", 00:20:29.287 "trsvcid": "4420" 00:20:29.287 }, 00:20:29.287 "peer_address": { 00:20:29.287 "trtype": "TCP", 00:20:29.287 "adrfam": "IPv4", 00:20:29.287 "traddr": "10.0.0.1", 00:20:29.287 "trsvcid": "45684" 00:20:29.287 }, 00:20:29.287 "auth": { 00:20:29.287 "state": "completed", 00:20:29.287 "digest": "sha384", 00:20:29.287 "dhgroup": "ffdhe4096" 00:20:29.287 } 00:20:29.287 } 00:20:29.287 ]' 00:20:29.287 07:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.287 07:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.287 07:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.287 07:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.287 07:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.287 07:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.287 07:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.287 07:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.545 07:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:29.545 07:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:30.481 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.481 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.481 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.481 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.481 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.481 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.481 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.481 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:30.481 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.740 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:30.740 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.740 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.740 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.740 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.740 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.740 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.740 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.740 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.998 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.998 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.998 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.998 07:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.567 00:20:31.567 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.567 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.567 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.825 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.825 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.825 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.825 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.825 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.825 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.825 { 00:20:31.825 "cntlid": 81, 00:20:31.825 "qid": 0, 00:20:31.825 "state": "enabled", 00:20:31.825 "thread": "nvmf_tgt_poll_group_000", 00:20:31.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:31.825 "listen_address": { 
00:20:31.825 "trtype": "TCP", 00:20:31.825 "adrfam": "IPv4", 00:20:31.825 "traddr": "10.0.0.2", 00:20:31.825 "trsvcid": "4420" 00:20:31.825 }, 00:20:31.825 "peer_address": { 00:20:31.825 "trtype": "TCP", 00:20:31.825 "adrfam": "IPv4", 00:20:31.825 "traddr": "10.0.0.1", 00:20:31.825 "trsvcid": "45702" 00:20:31.825 }, 00:20:31.825 "auth": { 00:20:31.825 "state": "completed", 00:20:31.825 "digest": "sha384", 00:20:31.825 "dhgroup": "ffdhe6144" 00:20:31.825 } 00:20:31.825 } 00:20:31.825 ]' 00:20:31.825 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.826 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.826 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.826 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.826 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.826 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.826 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.826 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.084 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:32.084 07:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:33.018 07:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.018 07:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.018 07:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.018 07:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.018 07:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.018 07:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.018 07:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.018 07:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.586 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.153 00:20:34.153 07:44:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.153 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.153 07:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.412 { 00:20:34.412 "cntlid": 83, 00:20:34.412 "qid": 0, 00:20:34.412 "state": "enabled", 00:20:34.412 "thread": "nvmf_tgt_poll_group_000", 00:20:34.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.412 "listen_address": { 00:20:34.412 "trtype": "TCP", 00:20:34.412 "adrfam": "IPv4", 00:20:34.412 "traddr": "10.0.0.2", 00:20:34.412 "trsvcid": "4420" 00:20:34.412 }, 00:20:34.412 "peer_address": { 00:20:34.412 "trtype": "TCP", 00:20:34.412 "adrfam": "IPv4", 00:20:34.412 "traddr": "10.0.0.1", 00:20:34.412 "trsvcid": "45728" 00:20:34.412 }, 00:20:34.412 "auth": { 00:20:34.412 "state": "completed", 00:20:34.412 "digest": "sha384", 00:20:34.412 "dhgroup": "ffdhe6144" 00:20:34.412 } 00:20:34.412 } 00:20:34.412 ]' 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.412 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.670 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:34.670 07:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:35.603 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.603 07:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.603 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.603 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.603 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.603 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.603 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.603 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.861 07:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.428 00:20:36.428 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.428 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.428 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.997 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.997 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.997 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.997 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.997 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.997 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.997 { 00:20:36.997 "cntlid": 85, 00:20:36.997 "qid": 0, 00:20:36.997 "state": "enabled", 00:20:36.997 "thread": "nvmf_tgt_poll_group_000", 00:20:36.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:36.997 "listen_address": { 00:20:36.997 "trtype": "TCP", 00:20:36.997 "adrfam": "IPv4", 00:20:36.997 "traddr": "10.0.0.2", 00:20:36.997 "trsvcid": "4420" 00:20:36.997 }, 00:20:36.997 "peer_address": { 00:20:36.997 "trtype": "TCP", 00:20:36.997 "adrfam": "IPv4", 00:20:36.997 "traddr": "10.0.0.1", 00:20:36.997 "trsvcid": "48642" 00:20:36.997 }, 00:20:36.997 "auth": { 00:20:36.997 "state": "completed", 00:20:36.997 "digest": "sha384", 00:20:36.997 "dhgroup": "ffdhe6144" 00:20:36.997 } 00:20:36.997 } 00:20:36.997 ]' 00:20:36.997 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.997 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.997 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.998 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.998 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.998 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:36.998 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.998 07:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.256 07:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:37.256 07:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:38.197 07:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.197 07:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.197 07:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.197 07:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.197 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.197 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:38.197 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.197 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.456 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.023 00:20:39.023 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.023 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.023 07:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.282 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.282 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.282 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.282 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.282 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.282 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.282 { 00:20:39.282 "cntlid": 87, 00:20:39.282 "qid": 0, 00:20:39.282 "state": "enabled", 00:20:39.282 "thread": "nvmf_tgt_poll_group_000", 00:20:39.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.282 "listen_address": { 00:20:39.282 "trtype": 
"TCP", 00:20:39.282 "adrfam": "IPv4", 00:20:39.282 "traddr": "10.0.0.2", 00:20:39.282 "trsvcid": "4420" 00:20:39.282 }, 00:20:39.282 "peer_address": { 00:20:39.282 "trtype": "TCP", 00:20:39.282 "adrfam": "IPv4", 00:20:39.282 "traddr": "10.0.0.1", 00:20:39.282 "trsvcid": "48668" 00:20:39.282 }, 00:20:39.282 "auth": { 00:20:39.282 "state": "completed", 00:20:39.282 "digest": "sha384", 00:20:39.282 "dhgroup": "ffdhe6144" 00:20:39.282 } 00:20:39.282 } 00:20:39.282 ]' 00:20:39.282 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.282 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.282 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.540 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.540 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.540 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.540 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.540 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.798 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:39.798 07:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:40.736 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.736 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.736 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.736 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.736 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.736 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.736 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.736 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.736 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.994 07:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.994 07:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.933 00:20:41.933 07:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.933 07:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.933 07:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.192 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.192 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.192 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.192 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.192 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.192 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.192 { 00:20:42.192 "cntlid": 89, 00:20:42.192 "qid": 0, 00:20:42.192 "state": "enabled", 00:20:42.192 "thread": "nvmf_tgt_poll_group_000", 00:20:42.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.192 "listen_address": { 00:20:42.192 "trtype": "TCP", 00:20:42.192 "adrfam": "IPv4", 00:20:42.192 "traddr": "10.0.0.2", 00:20:42.192 "trsvcid": "4420" 00:20:42.192 }, 00:20:42.192 "peer_address": { 00:20:42.192 "trtype": "TCP", 00:20:42.192 "adrfam": "IPv4", 00:20:42.192 "traddr": "10.0.0.1", 00:20:42.192 "trsvcid": "48698" 00:20:42.192 }, 00:20:42.192 "auth": { 00:20:42.192 "state": "completed", 00:20:42.192 "digest": "sha384", 00:20:42.192 "dhgroup": "ffdhe8192" 00:20:42.192 } 00:20:42.192 } 00:20:42.192 ]' 00:20:42.192 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.450 07:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.450 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.450 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.450 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.450 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.450 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.450 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.709 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:42.710 07:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:43.644 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:43.644 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.644 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.644 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.644 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.644 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.644 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.644 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.902 07:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.838 00:20:44.838 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.838 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.838 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.096 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.096 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.096 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.096 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.096 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.096 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.096 { 00:20:45.096 "cntlid": 91, 00:20:45.096 "qid": 0, 00:20:45.096 "state": "enabled", 00:20:45.096 "thread": "nvmf_tgt_poll_group_000", 00:20:45.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.096 "listen_address": { 00:20:45.096 "trtype": "TCP", 00:20:45.096 "adrfam": "IPv4", 00:20:45.096 "traddr": "10.0.0.2", 00:20:45.096 "trsvcid": "4420" 00:20:45.096 }, 00:20:45.096 "peer_address": { 00:20:45.096 "trtype": "TCP", 00:20:45.096 "adrfam": "IPv4", 00:20:45.096 "traddr": "10.0.0.1", 00:20:45.096 "trsvcid": "48726" 00:20:45.096 }, 00:20:45.096 "auth": { 00:20:45.096 "state": "completed", 00:20:45.096 "digest": "sha384", 00:20:45.096 "dhgroup": "ffdhe8192" 00:20:45.096 } 00:20:45.096 } 00:20:45.096 ]' 00:20:45.096 07:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.096 07:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.096 07:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.354 07:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.354 07:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.354 07:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:45.354 07:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.354 07:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.611 07:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:45.611 07:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:46.648 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.648 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.648 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.648 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.648 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.648 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:46.648 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.648 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.905 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:46.905 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.905 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.905 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:46.906 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:46.906 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.906 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.906 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.906 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.906 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.906 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.906 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.906 07:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.843 00:20:47.843 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.843 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.843 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.102 { 00:20:48.102 "cntlid": 93, 00:20:48.102 "qid": 0, 00:20:48.102 "state": "enabled", 00:20:48.102 "thread": "nvmf_tgt_poll_group_000", 00:20:48.102 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.102 "listen_address": { 00:20:48.102 "trtype": "TCP", 00:20:48.102 "adrfam": "IPv4", 00:20:48.102 "traddr": "10.0.0.2", 00:20:48.102 "trsvcid": "4420" 00:20:48.102 }, 00:20:48.102 "peer_address": { 00:20:48.102 "trtype": "TCP", 00:20:48.102 "adrfam": "IPv4", 00:20:48.102 "traddr": "10.0.0.1", 00:20:48.102 "trsvcid": "41838" 00:20:48.102 }, 00:20:48.102 "auth": { 00:20:48.102 "state": "completed", 00:20:48.102 "digest": "sha384", 00:20:48.102 "dhgroup": "ffdhe8192" 00:20:48.102 } 00:20:48.102 } 00:20:48.102 ]' 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.102 07:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.361 07:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:48.361 07:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:49.297 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.297 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.298 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.298 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.558 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.558 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.558 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.558 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.816 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:49.816 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:49.816 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.816 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.816 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:49.816 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.817 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:49.817 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.817 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.817 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.817 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:49.817 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.817 07:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.756 00:20:50.756 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:50.756 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.756 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.756 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.756 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.756 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.756 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.015 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.015 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.015 { 00:20:51.015 "cntlid": 95, 00:20:51.015 "qid": 0, 00:20:51.015 "state": "enabled", 00:20:51.015 "thread": "nvmf_tgt_poll_group_000", 00:20:51.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.015 "listen_address": { 00:20:51.015 "trtype": "TCP", 00:20:51.015 "adrfam": "IPv4", 00:20:51.015 "traddr": "10.0.0.2", 00:20:51.015 "trsvcid": "4420" 00:20:51.015 }, 00:20:51.015 "peer_address": { 00:20:51.015 "trtype": "TCP", 00:20:51.015 "adrfam": "IPv4", 00:20:51.015 "traddr": "10.0.0.1", 00:20:51.015 "trsvcid": "41864" 00:20:51.015 }, 00:20:51.015 "auth": { 00:20:51.015 "state": "completed", 00:20:51.015 "digest": "sha384", 00:20:51.015 "dhgroup": "ffdhe8192" 00:20:51.015 } 00:20:51.015 } 00:20:51.015 ]' 00:20:51.015 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.015 07:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.015 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.015 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.015 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.015 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.015 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.015 07:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.274 07:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:51.274 07:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:20:52.211 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.470 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.470 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.470 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.470 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.470 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:52.470 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.470 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.470 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.470 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.728 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.987 00:20:52.987 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.987 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.987 07:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.244 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.245 07:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.245 { 00:20:53.245 "cntlid": 97, 00:20:53.245 "qid": 0, 00:20:53.245 "state": "enabled", 00:20:53.245 "thread": "nvmf_tgt_poll_group_000", 00:20:53.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.245 "listen_address": { 00:20:53.245 "trtype": "TCP", 00:20:53.245 "adrfam": "IPv4", 00:20:53.245 "traddr": "10.0.0.2", 00:20:53.245 "trsvcid": "4420" 00:20:53.245 }, 00:20:53.245 "peer_address": { 00:20:53.245 "trtype": "TCP", 00:20:53.245 "adrfam": "IPv4", 00:20:53.245 "traddr": "10.0.0.1", 00:20:53.245 "trsvcid": "41894" 00:20:53.245 }, 00:20:53.245 "auth": { 00:20:53.245 "state": "completed", 00:20:53.245 "digest": "sha512", 00:20:53.245 "dhgroup": "null" 00:20:53.245 } 00:20:53.245 } 00:20:53.245 ]' 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.245 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.813 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:53.813 07:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:20:54.750 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.750 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.750 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.750 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.750 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.750 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.750 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.750 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.009 07:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.267 00:20:55.267 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.267 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.267 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.525 { 00:20:55.525 "cntlid": 99, 
00:20:55.525 "qid": 0, 00:20:55.525 "state": "enabled", 00:20:55.525 "thread": "nvmf_tgt_poll_group_000", 00:20:55.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.525 "listen_address": { 00:20:55.525 "trtype": "TCP", 00:20:55.525 "adrfam": "IPv4", 00:20:55.525 "traddr": "10.0.0.2", 00:20:55.525 "trsvcid": "4420" 00:20:55.525 }, 00:20:55.525 "peer_address": { 00:20:55.525 "trtype": "TCP", 00:20:55.525 "adrfam": "IPv4", 00:20:55.525 "traddr": "10.0.0.1", 00:20:55.525 "trsvcid": "45344" 00:20:55.525 }, 00:20:55.525 "auth": { 00:20:55.525 "state": "completed", 00:20:55.525 "digest": "sha512", 00:20:55.525 "dhgroup": "null" 00:20:55.525 } 00:20:55.525 } 00:20:55.525 ]' 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.525 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.091 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret 
DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:56.091 07:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:20:57.029 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.029 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.029 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.029 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.029 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.029 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.029 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.029 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.288 07:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.546 00:20:57.546 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.546 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.546 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.803 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.803 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.803 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.803 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.803 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.803 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.803 { 00:20:57.803 "cntlid": 101, 00:20:57.803 "qid": 0, 00:20:57.803 "state": "enabled", 00:20:57.803 "thread": "nvmf_tgt_poll_group_000", 00:20:57.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.803 "listen_address": { 00:20:57.803 "trtype": "TCP", 00:20:57.803 "adrfam": "IPv4", 00:20:57.803 "traddr": "10.0.0.2", 00:20:57.803 "trsvcid": "4420" 00:20:57.803 }, 00:20:57.803 "peer_address": { 00:20:57.803 "trtype": "TCP", 00:20:57.803 "adrfam": "IPv4", 00:20:57.803 "traddr": "10.0.0.1", 00:20:57.804 "trsvcid": "45366" 00:20:57.804 }, 00:20:57.804 "auth": { 00:20:57.804 "state": "completed", 00:20:57.804 "digest": "sha512", 00:20:57.804 "dhgroup": "null" 00:20:57.804 } 00:20:57.804 } 
00:20:57.804 ]' 00:20:57.804 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.804 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.804 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.804 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.804 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.804 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.804 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.804 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.062 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:58.062 07:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:20:59.436 07:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.436 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.437 07:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.437 07:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.437 07:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.437 07:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.437 07:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.437 07:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.437 07:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.437 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.006 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.006 { 00:21:00.006 "cntlid": 103, 00:21:00.006 "qid": 0, 00:21:00.006 "state": "enabled", 00:21:00.006 "thread": "nvmf_tgt_poll_group_000", 00:21:00.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.006 "listen_address": { 00:21:00.006 "trtype": "TCP", 00:21:00.006 "adrfam": "IPv4", 00:21:00.006 "traddr": "10.0.0.2", 00:21:00.006 "trsvcid": "4420" 00:21:00.006 }, 00:21:00.006 "peer_address": { 00:21:00.006 "trtype": "TCP", 00:21:00.006 "adrfam": "IPv4", 00:21:00.006 "traddr": "10.0.0.1", 00:21:00.006 "trsvcid": "45394" 00:21:00.006 }, 00:21:00.006 "auth": { 00:21:00.006 "state": "completed", 00:21:00.006 "digest": "sha512", 00:21:00.006 "dhgroup": "null" 00:21:00.006 } 00:21:00.006 } 00:21:00.006 ]' 00:21:00.006 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.264 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.264 07:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.264 07:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:00.264 07:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.264 07:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.264 07:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.264 07:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.523 07:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:00.523 07:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:01.462 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.462 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.462 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.462 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.462 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.462 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.462 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.462 07:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.462 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.722 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.292 00:21:02.292 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.292 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.292 07:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.292 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.292 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.292 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.292 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.551 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.551 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.551 { 00:21:02.551 "cntlid": 105, 00:21:02.551 "qid": 0, 00:21:02.551 "state": "enabled", 00:21:02.551 "thread": "nvmf_tgt_poll_group_000", 00:21:02.551 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.551 "listen_address": { 00:21:02.551 "trtype": "TCP", 00:21:02.551 "adrfam": "IPv4", 00:21:02.551 "traddr": "10.0.0.2", 00:21:02.551 "trsvcid": "4420" 00:21:02.551 }, 00:21:02.551 "peer_address": { 00:21:02.551 "trtype": "TCP", 00:21:02.551 "adrfam": "IPv4", 00:21:02.551 "traddr": "10.0.0.1", 00:21:02.551 "trsvcid": "45418" 00:21:02.551 }, 00:21:02.552 "auth": { 00:21:02.552 "state": "completed", 00:21:02.552 "digest": "sha512", 00:21:02.552 "dhgroup": "ffdhe2048" 00:21:02.552 } 00:21:02.552 } 00:21:02.552 ]' 00:21:02.552 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.552 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.552 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.552 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.552 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.552 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.552 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.552 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.810 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret 
DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:02.810 07:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:03.746 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.746 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.746 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.746 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.746 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.746 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.746 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.747 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:04.005 07:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.005 07:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.571 00:21:04.571 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.571 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.571 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.829 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.829 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.829 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.829 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.829 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.829 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.829 { 00:21:04.829 "cntlid": 107, 00:21:04.829 "qid": 0, 00:21:04.829 "state": "enabled", 00:21:04.829 "thread": "nvmf_tgt_poll_group_000", 00:21:04.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.829 "listen_address": { 00:21:04.829 "trtype": "TCP", 00:21:04.829 "adrfam": "IPv4", 00:21:04.829 "traddr": "10.0.0.2", 00:21:04.829 "trsvcid": "4420" 00:21:04.829 }, 00:21:04.829 "peer_address": { 00:21:04.829 "trtype": "TCP", 00:21:04.829 "adrfam": "IPv4", 00:21:04.829 "traddr": "10.0.0.1", 00:21:04.829 "trsvcid": "45442" 00:21:04.829 }, 00:21:04.829 "auth": { 00:21:04.829 "state": 
"completed", 00:21:04.829 "digest": "sha512", 00:21:04.829 "dhgroup": "ffdhe2048" 00:21:04.829 } 00:21:04.829 } 00:21:04.829 ]' 00:21:04.830 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.830 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.830 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.830 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.830 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.830 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.830 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.830 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.087 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:05.087 07:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:06.022 07:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.022 07:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.022 07:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.022 07:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.022 07:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.022 07:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.022 07:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.022 07:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.280 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.849 00:21:06.849 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.849 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.849 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.849 
07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.849 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.849 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.849 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.107 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.107 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.107 { 00:21:07.107 "cntlid": 109, 00:21:07.107 "qid": 0, 00:21:07.107 "state": "enabled", 00:21:07.107 "thread": "nvmf_tgt_poll_group_000", 00:21:07.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.107 "listen_address": { 00:21:07.107 "trtype": "TCP", 00:21:07.107 "adrfam": "IPv4", 00:21:07.107 "traddr": "10.0.0.2", 00:21:07.107 "trsvcid": "4420" 00:21:07.107 }, 00:21:07.107 "peer_address": { 00:21:07.107 "trtype": "TCP", 00:21:07.107 "adrfam": "IPv4", 00:21:07.107 "traddr": "10.0.0.1", 00:21:07.107 "trsvcid": "47056" 00:21:07.107 }, 00:21:07.107 "auth": { 00:21:07.107 "state": "completed", 00:21:07.107 "digest": "sha512", 00:21:07.107 "dhgroup": "ffdhe2048" 00:21:07.107 } 00:21:07.107 } 00:21:07.108 ]' 00:21:07.108 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.108 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.108 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.108 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.108 07:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.108 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.108 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.108 07:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.367 07:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:07.367 07:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:08.306 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.306 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.306 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.306 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.306 
07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.306 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.306 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.306 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.564 07:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.564 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.133 00:21:09.133 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.133 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.133 07:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.393 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.393 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.393 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.393 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.393 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.393 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.393 { 00:21:09.393 "cntlid": 111, 
00:21:09.393 "qid": 0, 00:21:09.393 "state": "enabled", 00:21:09.393 "thread": "nvmf_tgt_poll_group_000", 00:21:09.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.393 "listen_address": { 00:21:09.393 "trtype": "TCP", 00:21:09.393 "adrfam": "IPv4", 00:21:09.393 "traddr": "10.0.0.2", 00:21:09.393 "trsvcid": "4420" 00:21:09.393 }, 00:21:09.393 "peer_address": { 00:21:09.393 "trtype": "TCP", 00:21:09.393 "adrfam": "IPv4", 00:21:09.393 "traddr": "10.0.0.1", 00:21:09.393 "trsvcid": "47090" 00:21:09.394 }, 00:21:09.394 "auth": { 00:21:09.394 "state": "completed", 00:21:09.394 "digest": "sha512", 00:21:09.394 "dhgroup": "ffdhe2048" 00:21:09.394 } 00:21:09.394 } 00:21:09.394 ]' 00:21:09.394 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.394 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.394 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.394 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.394 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.394 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.394 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.394 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.652 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:09.652 07:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:10.589 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.589 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.589 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.589 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.589 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.589 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.589 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.589 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.589 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.847 07:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.847 07:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.415 00:21:11.415 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.415 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.415 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.673 { 00:21:11.673 "cntlid": 113, 00:21:11.673 "qid": 0, 00:21:11.673 "state": "enabled", 00:21:11.673 "thread": "nvmf_tgt_poll_group_000", 00:21:11.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.673 "listen_address": { 00:21:11.673 "trtype": "TCP", 00:21:11.673 "adrfam": "IPv4", 00:21:11.673 "traddr": "10.0.0.2", 00:21:11.673 "trsvcid": "4420" 00:21:11.673 }, 00:21:11.673 "peer_address": { 00:21:11.673 "trtype": "TCP", 00:21:11.673 "adrfam": "IPv4", 00:21:11.673 "traddr": "10.0.0.1", 00:21:11.673 "trsvcid": "47118" 00:21:11.673 }, 00:21:11.673 "auth": { 00:21:11.673 "state": 
"completed", 00:21:11.673 "digest": "sha512", 00:21:11.673 "dhgroup": "ffdhe3072" 00:21:11.673 } 00:21:11.673 } 00:21:11.673 ]' 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.673 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.933 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:11.933 07:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret 
DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:12.869 07:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.869 07:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.869 07:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.869 07:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.869 07:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.869 07:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.869 07:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.869 07:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.435 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.693 00:21:13.693 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.693 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.693 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.951 { 00:21:13.951 "cntlid": 115, 00:21:13.951 "qid": 0, 00:21:13.951 "state": "enabled", 00:21:13.951 "thread": "nvmf_tgt_poll_group_000", 00:21:13.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.951 "listen_address": { 00:21:13.951 "trtype": "TCP", 00:21:13.951 "adrfam": "IPv4", 00:21:13.951 "traddr": "10.0.0.2", 00:21:13.951 "trsvcid": "4420" 00:21:13.951 }, 00:21:13.951 "peer_address": { 00:21:13.951 "trtype": "TCP", 00:21:13.951 "adrfam": "IPv4", 00:21:13.951 "traddr": "10.0.0.1", 00:21:13.951 "trsvcid": "47148" 00:21:13.951 }, 00:21:13.951 "auth": { 00:21:13.951 "state": "completed", 00:21:13.951 "digest": "sha512", 00:21:13.951 "dhgroup": "ffdhe3072" 00:21:13.951 } 00:21:13.951 } 00:21:13.951 ]' 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.951 07:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.951 07:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.519 07:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:14.519 07:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:15.453 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.453 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.453 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:15.453 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.453 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.453 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.453 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.453 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.711 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.969 00:21:15.969 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.969 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.969 07:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.227 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.227 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.227 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.227 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.227 07:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.227 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.227 { 00:21:16.227 "cntlid": 117, 00:21:16.227 "qid": 0, 00:21:16.227 "state": "enabled", 00:21:16.227 "thread": "nvmf_tgt_poll_group_000", 00:21:16.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.227 "listen_address": { 00:21:16.227 "trtype": "TCP", 00:21:16.227 "adrfam": "IPv4", 00:21:16.227 "traddr": "10.0.0.2", 00:21:16.227 "trsvcid": "4420" 00:21:16.227 }, 00:21:16.227 "peer_address": { 00:21:16.227 "trtype": "TCP", 00:21:16.227 "adrfam": "IPv4", 00:21:16.227 "traddr": "10.0.0.1", 00:21:16.227 "trsvcid": "53940" 00:21:16.227 }, 00:21:16.227 "auth": { 00:21:16.227 "state": "completed", 00:21:16.227 "digest": "sha512", 00:21:16.227 "dhgroup": "ffdhe3072" 00:21:16.227 } 00:21:16.227 } 00:21:16.227 ]' 00:21:16.227 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.227 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.227 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.485 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.485 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.485 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.485 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.485 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.774 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:16.774 07:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:17.739 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.739 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.739 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.739 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.739 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.739 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.739 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.739 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.998 07:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.256 00:21:18.256 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.256 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.256 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.514 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.514 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.514 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.514 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.514 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.514 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.514 { 00:21:18.514 "cntlid": 119, 00:21:18.514 "qid": 0, 00:21:18.514 "state": "enabled", 00:21:18.514 "thread": "nvmf_tgt_poll_group_000", 00:21:18.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.514 "listen_address": { 00:21:18.514 "trtype": "TCP", 00:21:18.514 "adrfam": "IPv4", 00:21:18.514 "traddr": "10.0.0.2", 00:21:18.514 "trsvcid": "4420" 00:21:18.514 }, 00:21:18.514 "peer_address": { 00:21:18.514 "trtype": "TCP", 00:21:18.514 "adrfam": "IPv4", 00:21:18.514 "traddr": "10.0.0.1", 
00:21:18.514 "trsvcid": "53968" 00:21:18.514 }, 00:21:18.514 "auth": { 00:21:18.514 "state": "completed", 00:21:18.514 "digest": "sha512", 00:21:18.514 "dhgroup": "ffdhe3072" 00:21:18.514 } 00:21:18.514 } 00:21:18.514 ]' 00:21:18.514 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.773 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.773 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.773 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.773 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.773 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.773 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.773 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.031 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:19.031 07:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:19.968 07:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.968 07:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.969 07:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.969 07:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.969 07:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.969 07:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.969 07:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.969 07:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.969 07:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.227 07:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.227 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.794 00:21:20.794 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.794 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.794 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.053 { 00:21:21.053 "cntlid": 121, 00:21:21.053 "qid": 0, 00:21:21.053 "state": "enabled", 00:21:21.053 "thread": "nvmf_tgt_poll_group_000", 00:21:21.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.053 "listen_address": { 00:21:21.053 "trtype": "TCP", 00:21:21.053 "adrfam": "IPv4", 00:21:21.053 "traddr": "10.0.0.2", 00:21:21.053 "trsvcid": "4420" 00:21:21.053 }, 00:21:21.053 "peer_address": { 00:21:21.053 "trtype": "TCP", 00:21:21.053 "adrfam": "IPv4", 00:21:21.053 "traddr": "10.0.0.1", 00:21:21.053 "trsvcid": "53992" 00:21:21.053 }, 00:21:21.053 "auth": { 00:21:21.053 "state": "completed", 00:21:21.053 "digest": "sha512", 00:21:21.053 "dhgroup": "ffdhe4096" 00:21:21.053 } 00:21:21.053 } 00:21:21.053 ]' 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.053 07:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.053 07:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.313 07:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:21.313 07:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:22.250 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.250 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.250 07:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.250 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.250 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.250 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.250 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.250 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.819 07:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.819 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.078 00:21:23.078 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.078 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.078 07:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.336 { 00:21:23.336 "cntlid": 123, 00:21:23.336 "qid": 0, 00:21:23.336 "state": "enabled", 00:21:23.336 "thread": "nvmf_tgt_poll_group_000", 00:21:23.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:23.336 "listen_address": { 00:21:23.336 "trtype": "TCP", 00:21:23.336 "adrfam": "IPv4", 00:21:23.336 "traddr": "10.0.0.2", 00:21:23.336 "trsvcid": "4420" 00:21:23.336 }, 00:21:23.336 "peer_address": { 00:21:23.336 "trtype": "TCP", 00:21:23.336 "adrfam": "IPv4", 00:21:23.336 "traddr": "10.0.0.1", 00:21:23.336 "trsvcid": "54012" 00:21:23.336 }, 00:21:23.336 "auth": { 00:21:23.336 "state": "completed", 00:21:23.336 "digest": "sha512", 00:21:23.336 "dhgroup": "ffdhe4096" 00:21:23.336 } 00:21:23.336 } 00:21:23.336 ]' 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:23.336 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.595 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.595 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.595 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.852 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:23.853 07:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:24.788 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.788 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.788 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.788 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.788 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.788 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.788 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:24.788 07:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.045 07:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.303 00:21:25.303 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.303 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.303 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.562 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.562 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.562 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.562 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.820 { 00:21:25.820 "cntlid": 125, 00:21:25.820 "qid": 0, 00:21:25.820 "state": "enabled", 00:21:25.820 "thread": "nvmf_tgt_poll_group_000", 00:21:25.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:25.820 "listen_address": { 00:21:25.820 "trtype": "TCP", 00:21:25.820 "adrfam": "IPv4", 00:21:25.820 "traddr": "10.0.0.2", 00:21:25.820 
"trsvcid": "4420" 00:21:25.820 }, 00:21:25.820 "peer_address": { 00:21:25.820 "trtype": "TCP", 00:21:25.820 "adrfam": "IPv4", 00:21:25.820 "traddr": "10.0.0.1", 00:21:25.820 "trsvcid": "55356" 00:21:25.820 }, 00:21:25.820 "auth": { 00:21:25.820 "state": "completed", 00:21:25.820 "digest": "sha512", 00:21:25.820 "dhgroup": "ffdhe4096" 00:21:25.820 } 00:21:25.820 } 00:21:25.820 ]' 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.820 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.078 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:26.078 07:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:27.016 07:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.016 07:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.016 07:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.016 07:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.016 07:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.016 07:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.016 07:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.016 07:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.277 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.845 00:21:27.845 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.845 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.845 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.104 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.104 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.104 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.104 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.104 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.104 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.104 { 00:21:28.104 "cntlid": 127, 00:21:28.104 "qid": 0, 00:21:28.104 "state": "enabled", 00:21:28.104 "thread": "nvmf_tgt_poll_group_000", 00:21:28.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.104 "listen_address": { 00:21:28.104 "trtype": "TCP", 00:21:28.104 "adrfam": "IPv4", 00:21:28.104 "traddr": "10.0.0.2", 00:21:28.104 "trsvcid": "4420" 00:21:28.104 }, 00:21:28.104 "peer_address": { 00:21:28.104 "trtype": "TCP", 00:21:28.104 "adrfam": "IPv4", 00:21:28.104 "traddr": "10.0.0.1", 00:21:28.104 "trsvcid": "55386" 00:21:28.104 }, 00:21:28.104 "auth": { 00:21:28.104 "state": "completed", 00:21:28.104 "digest": "sha512", 00:21:28.104 "dhgroup": "ffdhe4096" 00:21:28.104 } 00:21:28.104 } 00:21:28.104 ]' 00:21:28.104 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.104 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.104 07:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.104 07:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:28.104 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.362 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.362 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.362 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.620 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:28.620 07:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:29.557 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.557 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.557 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.557 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:29.557 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.557 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.557 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.557 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.557 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.816 07:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.384 00:21:30.384 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.384 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.384 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.644 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.644 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.644 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.644 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.644 07:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.644 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.644 { 00:21:30.644 "cntlid": 129, 00:21:30.644 "qid": 0, 00:21:30.644 "state": "enabled", 00:21:30.644 "thread": "nvmf_tgt_poll_group_000", 00:21:30.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:30.644 "listen_address": { 00:21:30.644 "trtype": "TCP", 00:21:30.644 "adrfam": "IPv4", 00:21:30.644 "traddr": "10.0.0.2", 00:21:30.644 "trsvcid": "4420" 00:21:30.644 }, 00:21:30.644 "peer_address": { 00:21:30.644 "trtype": "TCP", 00:21:30.644 "adrfam": "IPv4", 00:21:30.644 "traddr": "10.0.0.1", 00:21:30.644 "trsvcid": "55408" 00:21:30.644 }, 00:21:30.644 "auth": { 00:21:30.644 "state": "completed", 00:21:30.644 "digest": "sha512", 00:21:30.644 "dhgroup": "ffdhe6144" 00:21:30.644 } 00:21:30.644 } 00:21:30.644 ]' 00:21:30.644 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.903 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.903 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.903 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:30.903 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.903 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.903 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.903 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.160 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:31.160 07:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:32.098 07:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.098 07:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.098 07:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.098 07:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.098 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.098 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.098 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.098 07:45:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:32.356 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:32.356 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.356 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.356 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:32.356 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:32.356 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.356 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.356 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.356 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.614 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.614 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.615 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.615 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.180 00:21:33.180 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.180 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.180 07:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.439 { 00:21:33.439 "cntlid": 131, 00:21:33.439 "qid": 0, 00:21:33.439 "state": "enabled", 00:21:33.439 "thread": "nvmf_tgt_poll_group_000", 00:21:33.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.439 "listen_address": { 00:21:33.439 "trtype": "TCP", 00:21:33.439 "adrfam": "IPv4", 00:21:33.439 "traddr": "10.0.0.2", 00:21:33.439 
"trsvcid": "4420" 00:21:33.439 }, 00:21:33.439 "peer_address": { 00:21:33.439 "trtype": "TCP", 00:21:33.439 "adrfam": "IPv4", 00:21:33.439 "traddr": "10.0.0.1", 00:21:33.439 "trsvcid": "55434" 00:21:33.439 }, 00:21:33.439 "auth": { 00:21:33.439 "state": "completed", 00:21:33.439 "digest": "sha512", 00:21:33.439 "dhgroup": "ffdhe6144" 00:21:33.439 } 00:21:33.439 } 00:21:33.439 ]' 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.439 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.697 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:33.697 07:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:34.632 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.632 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.632 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.632 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.632 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.632 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.632 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.632 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.199 07:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.765 00:21:35.765 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.765 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:35.765 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.024 { 00:21:36.024 "cntlid": 133, 00:21:36.024 "qid": 0, 00:21:36.024 "state": "enabled", 00:21:36.024 "thread": "nvmf_tgt_poll_group_000", 00:21:36.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.024 "listen_address": { 00:21:36.024 "trtype": "TCP", 00:21:36.024 "adrfam": "IPv4", 00:21:36.024 "traddr": "10.0.0.2", 00:21:36.024 "trsvcid": "4420" 00:21:36.024 }, 00:21:36.024 "peer_address": { 00:21:36.024 "trtype": "TCP", 00:21:36.024 "adrfam": "IPv4", 00:21:36.024 "traddr": "10.0.0.1", 00:21:36.024 "trsvcid": "40702" 00:21:36.024 }, 00:21:36.024 "auth": { 00:21:36.024 "state": "completed", 00:21:36.024 "digest": "sha512", 00:21:36.024 "dhgroup": "ffdhe6144" 00:21:36.024 } 00:21:36.024 } 00:21:36.024 ]' 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.024 07:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.024 07:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.283 07:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:36.283 07:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:37.217 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.474 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.474 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.474 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.474 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.474 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.474 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.474 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.732 07:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.297 00:21:38.297 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.297 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.297 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.555 { 00:21:38.555 "cntlid": 135, 00:21:38.555 "qid": 0, 00:21:38.555 "state": "enabled", 00:21:38.555 "thread": "nvmf_tgt_poll_group_000", 00:21:38.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:38.555 "listen_address": { 00:21:38.555 "trtype": "TCP", 00:21:38.555 "adrfam": "IPv4", 00:21:38.555 "traddr": "10.0.0.2", 00:21:38.555 "trsvcid": "4420" 00:21:38.555 }, 00:21:38.555 "peer_address": { 00:21:38.555 "trtype": "TCP", 00:21:38.555 "adrfam": "IPv4", 00:21:38.555 "traddr": "10.0.0.1", 00:21:38.555 "trsvcid": "40730" 00:21:38.555 }, 00:21:38.555 "auth": { 00:21:38.555 "state": "completed", 00:21:38.555 "digest": "sha512", 00:21:38.555 "dhgroup": "ffdhe6144" 00:21:38.555 } 00:21:38.555 } 00:21:38.555 ]' 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.555 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.121 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:39.122 07:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:40.058 07:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.058 07:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.058 07:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.058 07:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.058 07:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.058 07:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.058 07:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.058 07:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.058 07:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.316 07:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.250 00:21:41.250 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.250 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.250 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.509 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.509 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.509 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.509 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.509 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.509 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.509 { 00:21:41.509 "cntlid": 137, 00:21:41.509 "qid": 0, 00:21:41.509 "state": "enabled", 00:21:41.509 "thread": "nvmf_tgt_poll_group_000", 00:21:41.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.509 "listen_address": { 00:21:41.509 "trtype": "TCP", 00:21:41.509 "adrfam": "IPv4", 00:21:41.509 "traddr": "10.0.0.2", 00:21:41.509 
"trsvcid": "4420" 00:21:41.509 }, 00:21:41.509 "peer_address": { 00:21:41.509 "trtype": "TCP", 00:21:41.509 "adrfam": "IPv4", 00:21:41.509 "traddr": "10.0.0.1", 00:21:41.509 "trsvcid": "40770" 00:21:41.509 }, 00:21:41.509 "auth": { 00:21:41.509 "state": "completed", 00:21:41.509 "digest": "sha512", 00:21:41.509 "dhgroup": "ffdhe8192" 00:21:41.509 } 00:21:41.509 } 00:21:41.509 ]' 00:21:41.509 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.509 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.509 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.767 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.767 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.767 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.767 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.767 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.025 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:42.025 07:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:42.959 07:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.959 07:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.959 07:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.959 07:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.959 07:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.959 07:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.959 07:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.959 07:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.217 07:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.217 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.151 00:21:44.151 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.151 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.151 07:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.409 { 00:21:44.409 "cntlid": 139, 00:21:44.409 "qid": 0, 00:21:44.409 "state": "enabled", 00:21:44.409 "thread": "nvmf_tgt_poll_group_000", 00:21:44.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.409 "listen_address": { 00:21:44.409 "trtype": "TCP", 00:21:44.409 "adrfam": "IPv4", 00:21:44.409 "traddr": "10.0.0.2", 00:21:44.409 "trsvcid": "4420" 00:21:44.409 }, 00:21:44.409 "peer_address": { 00:21:44.409 "trtype": "TCP", 00:21:44.409 "adrfam": "IPv4", 00:21:44.409 "traddr": "10.0.0.1", 00:21:44.409 "trsvcid": "40800" 00:21:44.409 }, 00:21:44.409 "auth": { 00:21:44.409 "state": "completed", 00:21:44.409 "digest": "sha512", 00:21:44.409 "dhgroup": "ffdhe8192" 00:21:44.409 } 00:21:44.409 } 00:21:44.409 ]' 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.409 07:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:44.409 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.667 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.667 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.667 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.925 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:44.925 07:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: --dhchap-ctrl-secret DHHC-1:02:YjQ2MmNkOWZlZGIwYjQzOWVjYTdhNjRmNmExMWM1NGRkZDkwOWFhYjQ4YjczNDM4XYLYyA==: 00:21:45.860 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.861 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.861 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.861 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.861 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.861 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.861 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:45.861 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.119 07:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.159 00:21:47.159 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.159 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.159 07:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.438 07:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.438 { 00:21:47.438 "cntlid": 141, 00:21:47.438 "qid": 0, 00:21:47.438 "state": "enabled", 00:21:47.438 "thread": "nvmf_tgt_poll_group_000", 00:21:47.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.438 "listen_address": { 00:21:47.438 "trtype": "TCP", 00:21:47.438 "adrfam": "IPv4", 00:21:47.438 "traddr": "10.0.0.2", 00:21:47.438 "trsvcid": "4420" 00:21:47.438 }, 00:21:47.438 "peer_address": { 00:21:47.438 "trtype": "TCP", 00:21:47.438 "adrfam": "IPv4", 00:21:47.438 "traddr": "10.0.0.1", 00:21:47.438 "trsvcid": "48680" 00:21:47.438 }, 00:21:47.438 "auth": { 00:21:47.438 "state": "completed", 00:21:47.438 "digest": "sha512", 00:21:47.438 "dhgroup": "ffdhe8192" 00:21:47.438 } 00:21:47.438 } 00:21:47.438 ]' 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.438 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.695 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:47.695 07:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:01:OWQxNDVkZTE1YzZjODU5ZTUzZDdkZTY4M2RiM2U3OWKiD6AM: 00:21:48.628 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.628 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.628 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.628 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.628 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.628 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.628 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.628 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.886 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:48.886 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.886 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.886 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.886 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.886 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.886 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.886 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.886 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.144 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.144 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.144 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.144 07:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.077 00:21:50.077 07:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.077 07:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.077 07:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.336 { 00:21:50.336 "cntlid": 143, 00:21:50.336 "qid": 0, 00:21:50.336 "state": "enabled", 00:21:50.336 "thread": "nvmf_tgt_poll_group_000", 00:21:50.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.336 "listen_address": { 00:21:50.336 "trtype": "TCP", 00:21:50.336 "adrfam": 
"IPv4", 00:21:50.336 "traddr": "10.0.0.2", 00:21:50.336 "trsvcid": "4420" 00:21:50.336 }, 00:21:50.336 "peer_address": { 00:21:50.336 "trtype": "TCP", 00:21:50.336 "adrfam": "IPv4", 00:21:50.336 "traddr": "10.0.0.1", 00:21:50.336 "trsvcid": "48710" 00:21:50.336 }, 00:21:50.336 "auth": { 00:21:50.336 "state": "completed", 00:21:50.336 "digest": "sha512", 00:21:50.336 "dhgroup": "ffdhe8192" 00:21:50.336 } 00:21:50.336 } 00:21:50.336 ]' 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.336 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.594 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:50.594 07:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:51.530 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:52.096 07:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.096 07:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.031 00:21:53.031 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.031 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.031 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.031 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.031 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.031 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.031 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.289 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.289 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.289 { 00:21:53.289 "cntlid": 145, 00:21:53.289 "qid": 0, 00:21:53.289 "state": "enabled", 00:21:53.289 "thread": "nvmf_tgt_poll_group_000", 00:21:53.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.289 "listen_address": { 00:21:53.289 "trtype": "TCP", 00:21:53.289 "adrfam": "IPv4", 00:21:53.289 "traddr": "10.0.0.2", 00:21:53.289 "trsvcid": "4420" 00:21:53.289 }, 00:21:53.289 "peer_address": { 00:21:53.289 "trtype": "TCP", 00:21:53.289 "adrfam": "IPv4", 00:21:53.289 "traddr": "10.0.0.1", 00:21:53.289 "trsvcid": "48734" 00:21:53.289 }, 00:21:53.289 "auth": { 00:21:53.289 "state": 
"completed", 00:21:53.289 "digest": "sha512", 00:21:53.289 "dhgroup": "ffdhe8192" 00:21:53.289 } 00:21:53.289 } 00:21:53.289 ]' 00:21:53.289 07:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.289 07:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.289 07:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.289 07:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.289 07:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.289 07:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.289 07:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.289 07:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.548 07:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:53.548 07:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2E4MjQ5M2VjMDRmZWIzYTMyYmNiYTMwYmJkNDE0OTQ3ZGFkZDE3MTBmMzVlMjcz+T6J9g==: --dhchap-ctrl-secret 
DHHC-1:03:YzI3ZDFkOTBmYWQ2NTE1YjFiNWY3ZGU3ZDg2YjMzNTRlZWM4ZTAyZTAxMGVhMjBkNmYxODdjMTM3MzhlN2IyYVB+fJc=: 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:54.482 07:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:55.415 request: 00:21:55.415 { 00:21:55.415 "name": "nvme0", 00:21:55.415 "trtype": "tcp", 00:21:55.415 "traddr": "10.0.0.2", 00:21:55.415 "adrfam": "ipv4", 00:21:55.415 "trsvcid": "4420", 00:21:55.415 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:55.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.415 "prchk_reftag": false, 00:21:55.415 "prchk_guard": false, 00:21:55.415 "hdgst": false, 00:21:55.415 "ddgst": false, 00:21:55.415 "dhchap_key": "key2", 00:21:55.415 "allow_unrecognized_csi": false, 00:21:55.415 "method": "bdev_nvme_attach_controller", 00:21:55.415 "req_id": 1 00:21:55.415 } 00:21:55.415 Got JSON-RPC error response 00:21:55.415 response: 00:21:55.415 { 00:21:55.415 "code": -5, 00:21:55.415 "message": 
"Input/output error" 00:21:55.415 } 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:55.415 07:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:55.415 07:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:56.348 request: 00:21:56.348 { 00:21:56.348 "name": "nvme0", 00:21:56.348 "trtype": "tcp", 00:21:56.348 "traddr": "10.0.0.2", 00:21:56.348 "adrfam": "ipv4", 00:21:56.348 "trsvcid": "4420", 00:21:56.348 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:56.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.348 "prchk_reftag": false, 00:21:56.348 "prchk_guard": false, 00:21:56.348 "hdgst": 
false, 00:21:56.348 "ddgst": false, 00:21:56.348 "dhchap_key": "key1", 00:21:56.348 "dhchap_ctrlr_key": "ckey2", 00:21:56.348 "allow_unrecognized_csi": false, 00:21:56.348 "method": "bdev_nvme_attach_controller", 00:21:56.348 "req_id": 1 00:21:56.348 } 00:21:56.348 Got JSON-RPC error response 00:21:56.348 response: 00:21:56.348 { 00:21:56.348 "code": -5, 00:21:56.348 "message": "Input/output error" 00:21:56.348 } 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.348 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.282 request: 00:21:57.282 { 00:21:57.282 "name": "nvme0", 00:21:57.282 "trtype": 
"tcp", 00:21:57.282 "traddr": "10.0.0.2", 00:21:57.282 "adrfam": "ipv4", 00:21:57.282 "trsvcid": "4420", 00:21:57.282 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:57.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.282 "prchk_reftag": false, 00:21:57.282 "prchk_guard": false, 00:21:57.282 "hdgst": false, 00:21:57.282 "ddgst": false, 00:21:57.282 "dhchap_key": "key1", 00:21:57.282 "dhchap_ctrlr_key": "ckey1", 00:21:57.282 "allow_unrecognized_csi": false, 00:21:57.282 "method": "bdev_nvme_attach_controller", 00:21:57.282 "req_id": 1 00:21:57.282 } 00:21:57.282 Got JSON-RPC error response 00:21:57.282 response: 00:21:57.282 { 00:21:57.282 "code": -5, 00:21:57.282 "message": "Input/output error" 00:21:57.282 } 00:21:57.282 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.282 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.282 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.282 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.282 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.282 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.282 07:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2959789 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2959789 ']' 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2959789 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2959789 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2959789' 00:21:57.282 killing process with pid 2959789 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2959789 00:21:57.282 07:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2959789 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2983327 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2983327 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2983327 ']' 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.655 07:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.589 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.589 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:59.589 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.589 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2983327 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2983327 ']' 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.590 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.848 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.848 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:59.848 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:59.848 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.848 07:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.106 null0 00:22:00.106 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.106 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:00.106 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zFr 00:22:00.106 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.106 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.I6k ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I6k 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5pa 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ehs ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ehs 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FBM 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.p8x ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p8x 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jCF 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.364 07:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.738 nvme0n1 00:22:01.738 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.738 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.738 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.996 { 00:22:01.996 "cntlid": 1, 00:22:01.996 "qid": 0, 00:22:01.996 "state": "enabled", 00:22:01.996 "thread": "nvmf_tgt_poll_group_000", 00:22:01.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.996 "listen_address": { 00:22:01.996 "trtype": "TCP", 00:22:01.996 "adrfam": "IPv4", 00:22:01.996 "traddr": "10.0.0.2", 00:22:01.996 "trsvcid": "4420" 00:22:01.996 }, 00:22:01.996 "peer_address": { 00:22:01.996 "trtype": "TCP", 00:22:01.996 "adrfam": "IPv4", 00:22:01.996 "traddr": 
"10.0.0.1", 00:22:01.996 "trsvcid": "52340" 00:22:01.996 }, 00:22:01.996 "auth": { 00:22:01.996 "state": "completed", 00:22:01.996 "digest": "sha512", 00:22:01.996 "dhgroup": "ffdhe8192" 00:22:01.996 } 00:22:01.996 } 00:22:01.996 ]' 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.996 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.253 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.253 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.253 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.253 07:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.512 07:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:22:02.512 07:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:22:03.445 07:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:03.445 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:03.703 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:03.703 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:03.703 07:45:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:03.703 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:03.703 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.703 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:03.703 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.703 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:03.703 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.703 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.961 request: 00:22:03.961 { 00:22:03.961 "name": "nvme0", 00:22:03.961 "trtype": "tcp", 00:22:03.961 "traddr": "10.0.0.2", 00:22:03.961 "adrfam": "ipv4", 00:22:03.961 "trsvcid": "4420", 00:22:03.961 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:03.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.961 "prchk_reftag": false, 00:22:03.961 "prchk_guard": false, 00:22:03.961 "hdgst": false, 00:22:03.961 "ddgst": false, 00:22:03.961 "dhchap_key": "key3", 00:22:03.961 
"allow_unrecognized_csi": false, 00:22:03.961 "method": "bdev_nvme_attach_controller", 00:22:03.961 "req_id": 1 00:22:03.961 } 00:22:03.961 Got JSON-RPC error response 00:22:03.961 response: 00:22:03.961 { 00:22:03.961 "code": -5, 00:22:03.961 "message": "Input/output error" 00:22:03.961 } 00:22:03.961 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:03.961 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.961 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:03.961 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.961 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:03.961 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:03.961 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:03.961 07:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:04.220 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:04.220 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:04.220 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:04.220 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:04.220 07:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.220 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:04.220 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.220 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:04.220 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.220 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.478 request: 00:22:04.478 { 00:22:04.478 "name": "nvme0", 00:22:04.478 "trtype": "tcp", 00:22:04.478 "traddr": "10.0.0.2", 00:22:04.478 "adrfam": "ipv4", 00:22:04.478 "trsvcid": "4420", 00:22:04.478 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:04.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:04.478 "prchk_reftag": false, 00:22:04.478 "prchk_guard": false, 00:22:04.478 "hdgst": false, 00:22:04.478 "ddgst": false, 00:22:04.478 "dhchap_key": "key3", 00:22:04.478 "allow_unrecognized_csi": false, 00:22:04.478 "method": "bdev_nvme_attach_controller", 00:22:04.478 "req_id": 1 00:22:04.478 } 00:22:04.478 Got JSON-RPC error response 00:22:04.478 response: 00:22:04.478 { 00:22:04.478 "code": -5, 00:22:04.478 "message": "Input/output error" 00:22:04.478 } 00:22:04.478 
07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:04.478 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.478 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.478 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.478 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:04.478 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:04.478 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:04.478 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:04.478 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:04.478 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.044 07:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.610 request: 00:22:05.610 { 00:22:05.610 "name": "nvme0", 00:22:05.610 "trtype": "tcp", 00:22:05.610 "traddr": "10.0.0.2", 00:22:05.610 "adrfam": "ipv4", 00:22:05.610 "trsvcid": "4420", 00:22:05.610 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.610 "prchk_reftag": false, 00:22:05.610 "prchk_guard": false, 00:22:05.610 "hdgst": false, 00:22:05.610 "ddgst": false, 00:22:05.610 "dhchap_key": "key0", 00:22:05.610 "dhchap_ctrlr_key": "key1", 00:22:05.610 "allow_unrecognized_csi": false, 00:22:05.610 "method": "bdev_nvme_attach_controller", 00:22:05.610 "req_id": 1 00:22:05.610 } 00:22:05.610 Got JSON-RPC error response 00:22:05.610 response: 00:22:05.610 { 00:22:05.610 "code": -5, 00:22:05.610 "message": "Input/output error" 00:22:05.610 } 00:22:05.610 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.610 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.610 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.610 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.610 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:05.610 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:05.610 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:05.868 nvme0n1 00:22:05.868 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:05.868 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:05.868 07:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.126 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.127 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.127 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.385 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:06.385 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.385 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:06.385 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.643 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:06.643 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:06.643 07:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:08.015 nvme0n1 00:22:08.016 07:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:08.016 07:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:08.016 07:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.274 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.274 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:08.274 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.274 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.274 
07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.274 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:08.274 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:08.274 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.532 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.532 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:22:08.532 07:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: --dhchap-ctrl-secret DHHC-1:03:YTgxZTE5YjUzODYxYTlkMWFkYzM3MDQwY2ZhNDMzNmZkMTBjZWEyMDY3NGNkMTk5YTIyNDAzNzIwOWFhMjc2ZhRtH/g=: 00:22:09.467 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:09.467 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:09.467 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:09.467 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:09.467 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:09.467 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:09.467 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:09.467 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.467 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:09.725 07:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:10.659 request: 00:22:10.659 { 00:22:10.659 "name": "nvme0", 00:22:10.659 "trtype": "tcp", 00:22:10.659 "traddr": "10.0.0.2", 00:22:10.659 "adrfam": "ipv4", 00:22:10.659 "trsvcid": "4420", 00:22:10.659 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.659 "prchk_reftag": false, 00:22:10.659 "prchk_guard": false, 00:22:10.659 "hdgst": false, 00:22:10.659 "ddgst": false, 00:22:10.659 "dhchap_key": "key1", 00:22:10.659 "allow_unrecognized_csi": false, 00:22:10.659 "method": "bdev_nvme_attach_controller", 00:22:10.659 "req_id": 1 00:22:10.659 } 00:22:10.659 Got JSON-RPC error response 00:22:10.659 response: 00:22:10.659 { 00:22:10.659 "code": -5, 00:22:10.659 "message": "Input/output error" 00:22:10.659 } 00:22:10.659 07:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:10.659 07:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.659 07:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.659 07:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.659 07:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:10.659 07:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:10.659 07:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:12.034 nvme0n1 00:22:12.034 07:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:12.034 07:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:12.035 07:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.600 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.600 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.600 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.858 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.858 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.858 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.858 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.858 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:12.858 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:12.858 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:13.116 nvme0n1 00:22:13.116 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:13.116 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:13.116 07:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.374 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.374 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.374 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: '' 2s 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: ]] 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDBkMTU5ZTQ3ODViMjIwNjJmMmU4MGM0MTE3NjZmNGU8Y8L9: 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:13.632 07:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:16.158 
07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: 2s 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:16.158 07:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: ]] 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:M2I3Nzg2MzJmZGMyYWQwMDc4OTgyNDFmYzEzODg1ZGVmNTQyNzZjODRjOGQzNzhjsZAAwQ==: 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:16.158 07:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.057 07:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:19.431 nvme0n1 00:22:19.431 07:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.431 07:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.431 07:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.431 07:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.431 07:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.431 07:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.421 07:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:20.421 07:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:20.421 07:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.421 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.421 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.421 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.421 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.421 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.421 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:20.421 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:20.679 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:20.679 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:20.679 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:20.936 07:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.868 request: 00:22:21.868 { 00:22:21.868 "name": "nvme0", 00:22:21.868 "dhchap_key": "key1", 00:22:21.868 "dhchap_ctrlr_key": "key3", 00:22:21.868 "method": "bdev_nvme_set_keys", 00:22:21.868 "req_id": 1 00:22:21.868 } 00:22:21.868 Got JSON-RPC error response 00:22:21.868 response: 00:22:21.868 { 00:22:21.868 "code": -13, 00:22:21.868 "message": "Permission denied" 00:22:21.868 } 00:22:21.868 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:21.868 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:21.868 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:21.868 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:21.868 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:21.868 07:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:21.868 07:46:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.125 07:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:22.125 07:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.496 07:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:24.870 nvme0n1 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.870 07:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:24.870 07:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:25.803 request: 00:22:25.803 { 00:22:25.803 "name": "nvme0", 00:22:25.803 "dhchap_key": "key2", 00:22:25.803 "dhchap_ctrlr_key": "key0", 00:22:25.803 "method": "bdev_nvme_set_keys", 00:22:25.803 "req_id": 1 00:22:25.803 } 00:22:25.803 Got JSON-RPC error response 00:22:25.803 response: 00:22:25.803 { 00:22:25.803 "code": -13, 00:22:25.803 "message": "Permission denied" 00:22:25.803 } 00:22:25.803 07:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:25.804 07:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:25.804 07:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:25.804 07:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:25.804 07:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:25.804 07:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.804 07:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:26.061 07:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:26.061 07:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:27.431 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:27.431 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:27.431 07:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2959934 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2959934 ']' 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2959934 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2959934 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2959934' 00:22:27.431 killing process with pid 2959934 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2959934 00:22:27.431 07:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2959934 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.961 rmmod nvme_tcp 00:22:29.961 rmmod nvme_fabrics 00:22:29.961 rmmod nvme_keyring 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2983327 ']' 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2983327 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2983327 ']' 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2983327 
00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2983327 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2983327' 00:22:29.961 killing process with pid 2983327 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2983327 00:22:29.961 07:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2983327 00:22:30.897 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:30.897 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:30.897 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:30.897 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:30.897 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:30.898 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:30.898 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:30.898 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.898 07:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.898 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.898 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.898 07:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.802 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.802 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.zFr /tmp/spdk.key-sha256.5pa /tmp/spdk.key-sha384.FBM /tmp/spdk.key-sha512.jCF /tmp/spdk.key-sha512.I6k /tmp/spdk.key-sha384.ehs /tmp/spdk.key-sha256.p8x '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:32.802 00:22:32.802 real 3m46.164s 00:22:32.802 user 8m44.911s 00:22:32.802 sys 0m27.675s 00:22:32.802 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.802 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.802 ************************************ 00:22:32.802 END TEST nvmf_auth_target 00:22:32.802 ************************************ 00:22:32.802 07:46:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:32.802 07:46:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:32.802 07:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:32.802 07:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:22:32.802 07:46:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:33.061 ************************************ 00:22:33.061 START TEST nvmf_bdevio_no_huge 00:22:33.061 ************************************ 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:33.061 * Looking for test storage... 00:22:33.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.061 --rc genhtml_branch_coverage=1 00:22:33.061 --rc genhtml_function_coverage=1 00:22:33.061 --rc genhtml_legend=1 00:22:33.061 --rc geninfo_all_blocks=1 00:22:33.061 --rc geninfo_unexecuted_blocks=1 00:22:33.061 00:22:33.061 ' 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.061 --rc genhtml_branch_coverage=1 00:22:33.061 --rc genhtml_function_coverage=1 00:22:33.061 --rc genhtml_legend=1 00:22:33.061 --rc geninfo_all_blocks=1 00:22:33.061 --rc geninfo_unexecuted_blocks=1 00:22:33.061 00:22:33.061 ' 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.061 --rc genhtml_branch_coverage=1 00:22:33.061 --rc genhtml_function_coverage=1 00:22:33.061 --rc genhtml_legend=1 00:22:33.061 --rc geninfo_all_blocks=1 00:22:33.061 --rc geninfo_unexecuted_blocks=1 00:22:33.061 00:22:33.061 ' 00:22:33.061 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.061 --rc genhtml_branch_coverage=1 
00:22:33.061 --rc genhtml_function_coverage=1 00:22:33.061 --rc genhtml_legend=1 00:22:33.061 --rc geninfo_all_blocks=1 00:22:33.061 --rc geninfo_unexecuted_blocks=1 00:22:33.061 00:22:33.061 ' 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.062 07:46:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
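The lcov version gate earlier in the log (`cmp_versions 1.15 '<' 2`) splits each version string on `.-:` into an array and compares component by component. A simplified sketch of that logic, not SPDK's actual scripts/common.sh:

```shell
#!/usr/bin/env bash
# Simplified sketch of the component-wise version comparison the log walks
# through (cmp_versions in scripts/common.sh); illustrative, not the original.
lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"   # split "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # split "2"    -> (2)
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0 (so 1.15 vs 2 -> 1 vs 2)
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"    # the case the log exercises
```

In the log this gate selects the `--rc lcov_branch_coverage=1 ...` option set for the installed lcov version.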
00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.062 07:46:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:22:35.593 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:35.593 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.593 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:35.594 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.594 
07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:35.594 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
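The device discovery above maps each matched PCI address to its kernel net interfaces by globbing `/sys/bus/pci/devices/$pci/net/`*. A sketch of that lookup with the sysfs root as a parameter so it can run against a fake tree without hardware (the helper name is illustrative, not SPDK's):

```shell
# Sketch of the log's PCI -> net-interface mapping (nvmf/common.sh globs
# /sys/bus/pci/devices/$pci/net/ for this). Parameterized sysfs root so the
# sketch runs without real hardware.
pci_net_devs() {
    local root=$1 pci=$2 devs
    devs=("$root/devices/$pci/net/"*)            # one glob entry per interface
    [[ -e ${devs[0]} ]] || { echo "no net devices under $pci" >&2; return 1; }
    printf '%s\n' "${devs[@]##*/}"               # strip path, keep iface name
}

# Demo against a fake tree mirroring the log's 0000:0a:00.0 -> cvl_0_0
root=$(mktemp -d)
mkdir -p "$root/devices/0000:0a:00.0/net/cvl_0_0"
pci_net_devs "$root" 0000:0a:00.0   # prints: cvl_0_0
rm -rf "$root"
```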
00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:35.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:35.594 00:22:35.594 --- 10.0.0.2 ping statistics --- 00:22:35.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.594 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:22:35.594 00:22:35.594 --- 10.0.0.1 ping statistics --- 00:22:35.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.594 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2989213 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2989213 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2989213 ']' 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.594 07:46:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.594 [2024-11-19 07:46:27.298319] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
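The nvmf_tcp_init steps above move the target interface into a network namespace with 10.0.0.2 while the initiator side keeps 10.0.0.1, open TCP port 4420, then ping in both directions. A dry-run sketch of that sequence (pass `echo` to print the commands; the real steps need root, and the helper name is illustrative, not SPDK's):

```shell
# Dry-run sketch of the log's nvmf_tcp_init sequence. With "echo" as the
# runner it only prints the commands; run with an empty runner as root to
# actually configure the namespace.
nvmf_tcp_net_setup() {
    local run=$1 tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    $run ip -4 addr flush "$tgt_if"
    $run ip -4 addr flush "$ini_if"
    $run ip netns add "$ns"
    $run ip link set "$tgt_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$ini_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $run ip link set "$ini_if" up
    $run ip netns exec "$ns" ip link set "$tgt_if" up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                      # initiator -> target
    $run ip netns exec "$ns" ping -c 1 10.0.0.1  # target -> initiator
}

nvmf_tcp_net_setup echo   # dry run: prints each command
```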
00:22:35.594 [2024-11-19 07:46:27.298474] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:35.594 [2024-11-19 07:46:27.456765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.853 [2024-11-19 07:46:27.586869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.853 [2024-11-19 07:46:27.586946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.853 [2024-11-19 07:46:27.586984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.853 [2024-11-19 07:46:27.587007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.853 [2024-11-19 07:46:27.587024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
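The `-m 0x78` passed to nvmf_tgt is a core mask: binary 1111000, i.e. cores 3 through 6, which matches the "Reactor started on core 3..6" messages, just as the bdevio app's `-c 0x7` yields cores 0 through 2. A quick decode:

```shell
# Decode an SPDK/DPDK core mask into a core list: 0x78 = 0b1111000 sets
# bits 3,4,5,6, matching the reactor startup messages in the log.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$core")
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores[*]}"
}

mask_to_cores 0x78   # -> 3 4 5 6
mask_to_cores 0x7    # -> 0 1 2
```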
00:22:35.853 [2024-11-19 07:46:27.588879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:35.853 [2024-11-19 07:46:27.588927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:35.853 [2024-11-19 07:46:27.588956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.853 [2024-11-19 07:46:27.588962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.420 [2024-11-19 07:46:28.325065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:36.420 07:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.420 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.678 Malloc0 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.678 [2024-11-19 07:46:28.415762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.678 07:46:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.678 { 00:22:36.678 "params": { 00:22:36.678 "name": "Nvme$subsystem", 00:22:36.678 "trtype": "$TEST_TRANSPORT", 00:22:36.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.678 "adrfam": "ipv4", 00:22:36.678 "trsvcid": "$NVMF_PORT", 00:22:36.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.678 "hdgst": ${hdgst:-false}, 00:22:36.678 "ddgst": ${ddgst:-false} 00:22:36.678 }, 00:22:36.678 "method": "bdev_nvme_attach_controller" 00:22:36.678 } 00:22:36.678 EOF 00:22:36.678 )") 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:36.678 07:46:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:36.678 "params": { 00:22:36.678 "name": "Nvme1", 00:22:36.678 "trtype": "tcp", 00:22:36.678 "traddr": "10.0.0.2", 00:22:36.678 "adrfam": "ipv4", 00:22:36.678 "trsvcid": "4420", 00:22:36.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.678 "hdgst": false, 00:22:36.678 "ddgst": false 00:22:36.678 }, 00:22:36.678 "method": "bdev_nvme_attach_controller" 00:22:36.678 }' 00:22:36.678 [2024-11-19 07:46:28.501031] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:36.678 [2024-11-19 07:46:28.501172] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2989366 ] 00:22:36.936 [2024-11-19 07:46:28.655614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:36.936 [2024-11-19 07:46:28.796506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.936 [2024-11-19 07:46:28.796547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.936 [2024-11-19 07:46:28.796556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.501 I/O targets: 00:22:37.501 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:37.501 00:22:37.501 00:22:37.501 CUnit - A unit testing framework for C - Version 2.1-3 00:22:37.501 http://cunit.sourceforge.net/ 00:22:37.501 00:22:37.501 00:22:37.501 Suite: bdevio tests on: Nvme1n1 00:22:37.501 Test: blockdev write read block ...passed 00:22:37.501 Test: blockdev write zeroes read block ...passed 00:22:37.501 Test: blockdev write zeroes read no split ...passed 00:22:37.501 Test: blockdev write zeroes 
read split ...passed 00:22:37.760 Test: blockdev write zeroes read split partial ...passed 00:22:37.760 Test: blockdev reset ...[2024-11-19 07:46:29.451138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:37.760 [2024-11-19 07:46:29.451337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:37.760 [2024-11-19 07:46:29.509993] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:37.760 passed 00:22:37.760 Test: blockdev write read 8 blocks ...passed 00:22:37.760 Test: blockdev write read size > 128k ...passed 00:22:37.760 Test: blockdev write read invalid size ...passed 00:22:37.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:37.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:37.760 Test: blockdev write read max offset ...passed 00:22:37.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:37.760 Test: blockdev writev readv 8 blocks ...passed 00:22:37.760 Test: blockdev writev readv 30 x 1block ...passed 00:22:38.019 Test: blockdev writev readv block ...passed 00:22:38.019 Test: blockdev writev readv size > 128k ...passed 00:22:38.019 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:38.019 Test: blockdev comparev and writev ...[2024-11-19 07:46:29.772569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.019 [2024-11-19 07:46:29.772650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.772699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.019 
[2024-11-19 07:46:29.772727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.773206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.019 [2024-11-19 07:46:29.773243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.773277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.019 [2024-11-19 07:46:29.773303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.773740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.019 [2024-11-19 07:46:29.773775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.773809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.019 [2024-11-19 07:46:29.773835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.774275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.019 [2024-11-19 07:46:29.774314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.774351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.019 [2024-11-19 07:46:29.774377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:38.019 passed 00:22:38.019 Test: blockdev nvme passthru rw ...passed 00:22:38.019 Test: blockdev nvme passthru vendor specific ...[2024-11-19 07:46:29.858112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.019 [2024-11-19 07:46:29.858171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.858426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.019 [2024-11-19 07:46:29.858460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.858660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.019 [2024-11-19 07:46:29.858708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:38.019 [2024-11-19 07:46:29.858957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.019 [2024-11-19 07:46:29.858990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:38.019 passed 00:22:38.019 Test: blockdev nvme admin passthru ...passed 00:22:38.019 Test: blockdev copy ...passed 00:22:38.019 00:22:38.019 Run Summary: Type Total Ran Passed Failed Inactive 00:22:38.019 suites 1 1 n/a 0 0 00:22:38.019 tests 23 23 23 0 0 00:22:38.019 asserts 152 152 152 0 n/a 00:22:38.019 00:22:38.019 Elapsed time = 1.319 
seconds 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.952 rmmod nvme_tcp 00:22:38.952 rmmod nvme_fabrics 00:22:38.952 rmmod nvme_keyring 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2989213 ']' 00:22:38.952 07:46:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2989213 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2989213 ']' 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2989213 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2989213 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2989213' 00:22:38.952 killing process with pid 2989213 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2989213 00:22:38.952 07:46:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2989213 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:39.888 07:46:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.888 07:46:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.792 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.792 00:22:41.792 real 0m8.827s 00:22:41.792 user 0m20.043s 00:22:41.792 sys 0m2.958s 00:22:41.792 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.792 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.792 ************************************ 00:22:41.792 END TEST nvmf_bdevio_no_huge 00:22:41.793 ************************************ 00:22:41.793 07:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:41.793 07:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:41.793 07:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.793 07:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:41.793 
************************************ 00:22:41.793 START TEST nvmf_tls 00:22:41.793 ************************************ 00:22:41.793 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:41.793 * Looking for test storage... 00:22:41.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.793 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:41.793 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:41.793 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:42.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.052 --rc genhtml_branch_coverage=1 00:22:42.052 --rc genhtml_function_coverage=1 00:22:42.052 --rc genhtml_legend=1 00:22:42.052 --rc geninfo_all_blocks=1 00:22:42.052 --rc geninfo_unexecuted_blocks=1 00:22:42.052 00:22:42.052 ' 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:42.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.052 --rc genhtml_branch_coverage=1 00:22:42.052 --rc genhtml_function_coverage=1 00:22:42.052 --rc genhtml_legend=1 00:22:42.052 --rc geninfo_all_blocks=1 00:22:42.052 --rc geninfo_unexecuted_blocks=1 00:22:42.052 00:22:42.052 ' 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:42.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.052 --rc genhtml_branch_coverage=1 00:22:42.052 --rc genhtml_function_coverage=1 00:22:42.052 --rc genhtml_legend=1 00:22:42.052 --rc geninfo_all_blocks=1 00:22:42.052 --rc geninfo_unexecuted_blocks=1 00:22:42.052 00:22:42.052 ' 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:42.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.052 --rc genhtml_branch_coverage=1 00:22:42.052 --rc genhtml_function_coverage=1 00:22:42.052 --rc genhtml_legend=1 00:22:42.052 --rc geninfo_all_blocks=1 00:22:42.052 --rc geninfo_unexecuted_blocks=1 00:22:42.052 00:22:42.052 ' 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.052 
07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.052 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:42.053 07:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.955 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.956 07:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:43.956 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:43.956 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.956 07:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:43.956 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:43.956 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:43.956 07:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.956 
07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.956 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:22:44.215 00:22:44.215 --- 10.0.0.2 ping statistics --- 00:22:44.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.215 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:22:44.215 00:22:44.215 --- 10.0.0.1 ping statistics --- 00:22:44.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.215 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.215 07:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2991591 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2991591 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2991591 ']' 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.215 07:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.215 [2024-11-19 07:46:36.117098] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:44.215 [2024-11-19 07:46:36.117272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.474 [2024-11-19 07:46:36.286138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.732 [2024-11-19 07:46:36.422516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.732 [2024-11-19 07:46:36.422594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:44.732 [2024-11-19 07:46:36.422628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.732 [2024-11-19 07:46:36.422652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.732 [2024-11-19 07:46:36.422672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.732 [2024-11-19 07:46:36.424267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.298 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.298 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:45.298 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.298 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:45.298 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.298 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.298 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:45.298 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:45.555 true 00:22:45.555 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:45.556 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:45.814 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:45.814 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:45.814 
07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:46.072 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.072 07:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:46.638 07:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:46.638 07:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:46.638 07:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:46.638 07:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.638 07:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:46.896 07:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:46.896 07:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:46.896 07:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:46.896 07:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:47.461 07:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:47.461 07:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:47.461 07:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
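The `sock_impl_get_options` calls above return JSON and the script extracts a single field with `jq -r`, then verifies it with a glob-pattern `[[ ... != ... ]]` test. The sketch below reproduces that round-trip check against a hypothetical sample of the RPC output (a live call needs a running nvmf_tgt, so the JSON here is hand-written; field names follow the log):

```shell
# Hypothetical sample of sock_impl_get_options output; the real value
# comes from rpc.py and would be piped through jq -r .tls_version.
opts='{"tls_version": 13, "enable_ktls": false}'

# Extract the field (python stands in for jq to avoid the dependency)
version=$(echo "$opts" | python3 -c 'import json,sys; print(json.load(sys.stdin)["tls_version"])')

# Same shape of check as target/tls.sh: fail if the set value did not round-trip
if [ "$version" != "13" ]; then
    echo "tls_version mismatch: $version" >&2
    exit 1
fi
echo "tls_version=$version"
```

The log's pattern of set-then-get-then-compare (versions 13 and 7, then the ktls flag) is this same check repeated per option.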
00:22:47.719 07:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:47.719 07:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:47.977 07:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:47.977 07:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:47.977 07:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:48.235 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:48.235 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:48.492 07:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:48.492 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.wyw4P4LwBT 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.ZnmBbFB1zR 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.wyw4P4LwBT 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.ZnmBbFB1zR 00:22:48.749 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:49.007 07:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:49.573 07:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.wyw4P4LwBT 00:22:49.574 07:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wyw4P4LwBT 00:22:49.574 07:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:49.832 [2024-11-19 07:46:41.677298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.832 07:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:50.090 07:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:50.348 [2024-11-19 07:46:42.234791] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.348 [2024-11-19 07:46:42.235169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.348 07:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:50.916 malloc0 00:22:50.916 07:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:50.916 07:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wyw4P4LwBT 00:22:51.483 07:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:51.742 07:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.wyw4P4LwBT 00:23:01.773 Initializing NVMe Controllers 00:23:01.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:01.773 Initialization complete. Launching workers. 
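The `format_interchange_psk` steps earlier in the log build the `NVMeTLSkey-1:01:...:` strings that get written to the `/tmp/tmp.*` key files and loaded via `keyring_file_add_key`. A hedged reconstruction of that encoding, based on the `format_key` helper's visible inputs and outputs (prefix, key, digest id, and a `python -` heredoc): base64 of the configured key bytes followed by a 4-byte CRC32 of those bytes. The little-endian CRC byte order is an assumption, not confirmed by the log:

```shell
# Sketch of format_interchange_psk from nvmf/common.sh (assumed layout:
# base64(key_bytes + crc32_le(key_bytes)) inside the NVMeTLSkey-1 wrapper).
key=00112233445566778899aabbccddeeff
psk=$(python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # byte order is an assumption
print("NVMeTLSkey-1:01:{}:".format(base64.b64encode(key + crc).decode()))
EOF
)
echo "$psk"
```

If this reconstruction matches SPDK's helper, it yields the `NVMeTLSkey-1:01:MDAx...JEiQ:` value seen above; the `01` field is the digest id passed as the helper's third argument.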
00:23:01.773 ======================================================== 00:23:01.773 Latency(us) 00:23:01.773 Device Information : IOPS MiB/s Average min max 00:23:01.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5580.78 21.80 11473.15 2586.44 13420.11 00:23:01.773 ======================================================== 00:23:01.773 Total : 5580.78 21.80 11473.15 2586.44 13420.11 00:23:01.773 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wyw4P4LwBT 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wyw4P4LwBT 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2993740 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2993740 /var/tmp/bdevperf.sock 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2993740 ']' 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.773 07:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.032 [2024-11-19 07:46:53.776615] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:02.032 [2024-11-19 07:46:53.776784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993740 ] 00:23:02.032 [2024-11-19 07:46:53.928872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.290 [2024-11-19 07:46:54.050437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.856 07:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.856 07:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:02.856 07:46:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wyw4P4LwBT 00:23:03.114 07:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:03.372 [2024-11-19 07:46:55.254437] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:03.631 TLSTESTn1 00:23:03.632 07:46:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:03.632 Running I/O for 10 seconds... 00:23:05.951 2596.00 IOPS, 10.14 MiB/s [2024-11-19T06:46:58.816Z] 2611.00 IOPS, 10.20 MiB/s [2024-11-19T06:46:59.766Z] 2621.33 IOPS, 10.24 MiB/s [2024-11-19T06:47:00.701Z] 2622.25 IOPS, 10.24 MiB/s [2024-11-19T06:47:01.636Z] 2624.20 IOPS, 10.25 MiB/s [2024-11-19T06:47:02.571Z] 2622.50 IOPS, 10.24 MiB/s [2024-11-19T06:47:03.506Z] 2622.57 IOPS, 10.24 MiB/s [2024-11-19T06:47:04.882Z] 2623.12 IOPS, 10.25 MiB/s [2024-11-19T06:47:05.817Z] 2624.56 IOPS, 10.25 MiB/s [2024-11-19T06:47:05.817Z] 2626.70 IOPS, 10.26 MiB/s 00:23:13.887 Latency(us) 00:23:13.888 [2024-11-19T06:47:05.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.888 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:13.888 Verification LBA range: start 0x0 length 0x2000 00:23:13.888 TLSTESTn1 : 10.03 2631.85 10.28 0.00 0.00 48537.35 8786.68 35923.44 00:23:13.888 [2024-11-19T06:47:05.818Z] =================================================================================================================== 00:23:13.888 [2024-11-19T06:47:05.818Z] Total : 2631.85 10.28 0.00 0.00 48537.35 8786.68 35923.44 00:23:13.888 { 00:23:13.888 "results": [ 00:23:13.888 { 00:23:13.888 "job": "TLSTESTn1", 00:23:13.888 "core_mask": "0x4", 00:23:13.888 "workload": "verify", 00:23:13.888 "status": "finished", 00:23:13.888 "verify_range": { 00:23:13.888 "start": 0, 00:23:13.888 "length": 8192 00:23:13.888 }, 00:23:13.888 "queue_depth": 128, 00:23:13.888 "io_size": 4096, 00:23:13.888 
"runtime": 10.028671, 00:23:13.888 "iops": 2631.8542107922376, 00:23:13.888 "mibps": 10.280680510907178, 00:23:13.888 "io_failed": 0, 00:23:13.888 "io_timeout": 0, 00:23:13.888 "avg_latency_us": 48537.351816097376, 00:23:13.888 "min_latency_us": 8786.678518518518, 00:23:13.888 "max_latency_us": 35923.43703703704 00:23:13.888 } 00:23:13.888 ], 00:23:13.888 "core_count": 1 00:23:13.888 } 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2993740 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2993740 ']' 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2993740 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2993740 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2993740' 00:23:13.888 killing process with pid 2993740 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2993740 00:23:13.888 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.888 00:23:13.888 Latency(us) 00:23:13.888 [2024-11-19T06:47:05.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.888 [2024-11-19T06:47:05.818Z] 
=================================================================================================================== 00:23:13.888 [2024-11-19T06:47:05.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.888 07:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2993740 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZnmBbFB1zR 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZnmBbFB1zR 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZnmBbFB1zR 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZnmBbFB1zR 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2995196 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2995196 /var/tmp/bdevperf.sock 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2995196 ']' 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.824 07:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.824 [2024-11-19 07:47:06.488379] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:14.824 [2024-11-19 07:47:06.488540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995196 ] 00:23:14.824 [2024-11-19 07:47:06.619603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.824 [2024-11-19 07:47:06.738448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.760 07:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.760 07:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.760 07:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZnmBbFB1zR 00:23:16.018 07:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.276 [2024-11-19 07:47:08.116294] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.276 [2024-11-19 07:47:08.126387] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:16.276 [2024-11-19 07:47:08.127297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:16.276 [2024-11-19 07:47:08.128261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:16.276 
[2024-11-19 07:47:08.129262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:16.276 [2024-11-19 07:47:08.129302] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:16.276 [2024-11-19 07:47:08.129325] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:16.276 [2024-11-19 07:47:08.129356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:16.276 request: 00:23:16.276 { 00:23:16.276 "name": "TLSTEST", 00:23:16.276 "trtype": "tcp", 00:23:16.276 "traddr": "10.0.0.2", 00:23:16.276 "adrfam": "ipv4", 00:23:16.276 "trsvcid": "4420", 00:23:16.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.276 "prchk_reftag": false, 00:23:16.276 "prchk_guard": false, 00:23:16.276 "hdgst": false, 00:23:16.276 "ddgst": false, 00:23:16.276 "psk": "key0", 00:23:16.276 "allow_unrecognized_csi": false, 00:23:16.276 "method": "bdev_nvme_attach_controller", 00:23:16.276 "req_id": 1 00:23:16.276 } 00:23:16.276 Got JSON-RPC error response 00:23:16.276 response: 00:23:16.276 { 00:23:16.276 "code": -5, 00:23:16.276 "message": "Input/output error" 00:23:16.276 } 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2995196 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2995196 ']' 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2995196 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2995196 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2995196' 00:23:16.276 killing process with pid 2995196 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2995196 00:23:16.276 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.276 00:23:16.276 Latency(us) 00:23:16.276 [2024-11-19T06:47:08.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.276 [2024-11-19T06:47:08.206Z] =================================================================================================================== 00:23:16.276 [2024-11-19T06:47:08.206Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:16.276 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2995196 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wyw4P4LwBT 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wyw4P4LwBT 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wyw4P4LwBT 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wyw4P4LwBT 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2995473 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.211 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.212 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2995473 
/var/tmp/bdevperf.sock 00:23:17.212 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2995473 ']' 00:23:17.212 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.212 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.212 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.212 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.212 07:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.212 [2024-11-19 07:47:09.070713] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:17.212 [2024-11-19 07:47:09.070861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995473 ] 00:23:17.469 [2024-11-19 07:47:09.201793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.469 [2024-11-19 07:47:09.321408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.404 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.404 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.404 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wyw4P4LwBT 00:23:18.662 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:18.921 [2024-11-19 07:47:10.672846] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.921 [2024-11-19 07:47:10.682406] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:18.921 [2024-11-19 07:47:10.682443] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:18.921 [2024-11-19 07:47:10.682519] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:18.921 [2024-11-19 07:47:10.682612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:18.921 [2024-11-19 07:47:10.683565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:18.921 [2024-11-19 07:47:10.684568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:18.921 [2024-11-19 07:47:10.684598] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:18.921 [2024-11-19 07:47:10.684630] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:18.921 [2024-11-19 07:47:10.684657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:18.921 request: 00:23:18.921 { 00:23:18.921 "name": "TLSTEST", 00:23:18.921 "trtype": "tcp", 00:23:18.921 "traddr": "10.0.0.2", 00:23:18.921 "adrfam": "ipv4", 00:23:18.921 "trsvcid": "4420", 00:23:18.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.921 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.921 "prchk_reftag": false, 00:23:18.921 "prchk_guard": false, 00:23:18.921 "hdgst": false, 00:23:18.921 "ddgst": false, 00:23:18.921 "psk": "key0", 00:23:18.921 "allow_unrecognized_csi": false, 00:23:18.921 "method": "bdev_nvme_attach_controller", 00:23:18.921 "req_id": 1 00:23:18.921 } 00:23:18.921 Got JSON-RPC error response 00:23:18.921 response: 00:23:18.921 { 00:23:18.921 "code": -5, 00:23:18.921 "message": "Input/output error" 00:23:18.921 } 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2995473 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2995473 ']' 00:23:18.921 
07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2995473 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2995473 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2995473' 00:23:18.921 killing process with pid 2995473 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2995473 00:23:18.921 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.921 00:23:18.921 Latency(us) 00:23:18.921 [2024-11-19T06:47:10.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.921 [2024-11-19T06:47:10.851Z] =================================================================================================================== 00:23:18.921 [2024-11-19T06:47:10.851Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.921 07:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2995473 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:19.855 
07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wyw4P4LwBT 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wyw4P4LwBT 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wyw4P4LwBT 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wyw4P4LwBT 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2995763 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2995763 /var/tmp/bdevperf.sock 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2995763 ']' 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.855 07:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.855 [2024-11-19 07:47:11.613787] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:19.855 [2024-11-19 07:47:11.613940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995763 ] 00:23:19.855 [2024-11-19 07:47:11.754328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.114 [2024-11-19 07:47:11.874407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.051 07:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.051 07:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:21.051 07:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wyw4P4LwBT 00:23:21.051 07:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.310 [2024-11-19 07:47:13.153664] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.310 [2024-11-19 07:47:13.165238] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:21.310 [2024-11-19 07:47:13.165274] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:21.310 [2024-11-19 07:47:13.165339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:21.310 [2024-11-19 07:47:13.165565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:21.310 [2024-11-19 07:47:13.166535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:21.310 [2024-11-19 07:47:13.167536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:21.310 [2024-11-19 07:47:13.167565] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:21.310 [2024-11-19 07:47:13.167593] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:21.310 [2024-11-19 07:47:13.167619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:21.310 request: 00:23:21.310 { 00:23:21.310 "name": "TLSTEST", 00:23:21.310 "trtype": "tcp", 00:23:21.310 "traddr": "10.0.0.2", 00:23:21.310 "adrfam": "ipv4", 00:23:21.310 "trsvcid": "4420", 00:23:21.310 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:21.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.310 "prchk_reftag": false, 00:23:21.310 "prchk_guard": false, 00:23:21.310 "hdgst": false, 00:23:21.310 "ddgst": false, 00:23:21.310 "psk": "key0", 00:23:21.310 "allow_unrecognized_csi": false, 00:23:21.310 "method": "bdev_nvme_attach_controller", 00:23:21.310 "req_id": 1 00:23:21.310 } 00:23:21.310 Got JSON-RPC error response 00:23:21.310 response: 00:23:21.310 { 00:23:21.310 "code": -5, 00:23:21.310 "message": "Input/output error" 00:23:21.310 } 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2995763 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2995763 ']' 00:23:21.310 
07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2995763 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2995763 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2995763' 00:23:21.310 killing process with pid 2995763 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2995763 00:23:21.310 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.310 00:23:21.310 Latency(us) 00:23:21.310 [2024-11-19T06:47:13.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.310 [2024-11-19T06:47:13.240Z] =================================================================================================================== 00:23:21.310 [2024-11-19T06:47:13.240Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:21.310 07:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2995763 00:23:22.247 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:22.247 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:22.247 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:22.247 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:22.247 
07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:22.247 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:22.247 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:22.247 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:22.247 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:22.247 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2996150 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.248 07:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2996150 /var/tmp/bdevperf.sock 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2996150 ']' 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.248 07:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.248 [2024-11-19 07:47:14.095572] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:22.248 [2024-11-19 07:47:14.095733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996150 ] 00:23:22.510 [2024-11-19 07:47:14.227464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.510 [2024-11-19 07:47:14.346658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.454 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.454 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.454 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:23.454 [2024-11-19 07:47:15.295237] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:23.454 [2024-11-19 07:47:15.295288] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:23.454 request: 00:23:23.454 { 00:23:23.454 "name": "key0", 00:23:23.454 "path": "", 00:23:23.454 "method": "keyring_file_add_key", 00:23:23.454 "req_id": 1 00:23:23.454 } 00:23:23.454 Got JSON-RPC error response 00:23:23.454 response: 00:23:23.454 { 00:23:23.454 "code": -1, 00:23:23.454 "message": "Operation not permitted" 00:23:23.454 } 00:23:23.454 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.713 [2024-11-19 07:47:15.552076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:23.713 [2024-11-19 07:47:15.552148] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:23.713 request: 00:23:23.713 { 00:23:23.713 "name": "TLSTEST", 00:23:23.713 "trtype": "tcp", 00:23:23.713 "traddr": "10.0.0.2", 00:23:23.713 "adrfam": "ipv4", 00:23:23.713 "trsvcid": "4420", 00:23:23.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.713 "prchk_reftag": false, 00:23:23.713 "prchk_guard": false, 00:23:23.713 "hdgst": false, 00:23:23.713 "ddgst": false, 00:23:23.713 "psk": "key0", 00:23:23.713 "allow_unrecognized_csi": false, 00:23:23.713 "method": "bdev_nvme_attach_controller", 00:23:23.713 "req_id": 1 00:23:23.713 } 00:23:23.713 Got JSON-RPC error response 00:23:23.713 response: 00:23:23.713 { 00:23:23.713 "code": -126, 00:23:23.713 "message": "Required key not available" 00:23:23.713 } 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2996150 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2996150 ']' 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2996150 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2996150 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2996150' 00:23:23.713 killing process with pid 2996150 
00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2996150 00:23:23.713 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.713 00:23:23.713 Latency(us) 00:23:23.713 [2024-11-19T06:47:15.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.713 [2024-11-19T06:47:15.643Z] =================================================================================================================== 00:23:23.713 [2024-11-19T06:47:15.643Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.713 07:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2996150 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2991591 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2991591 ']' 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2991591 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2991591 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2991591' 00:23:24.715 killing process with pid 2991591 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2991591 00:23:24.715 07:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2991591 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.DE2e5zWclL 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:26.121 07:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.DE2e5zWclL 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2996574 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2996574 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2996574 ']' 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.121 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.122 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.122 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.122 07:47:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.122 [2024-11-19 07:47:17.851728] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:26.122 [2024-11-19 07:47:17.851885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.122 [2024-11-19 07:47:17.991214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.381 [2024-11-19 07:47:18.112397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.381 [2024-11-19 07:47:18.112482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.381 [2024-11-19 07:47:18.112503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.381 [2024-11-19 07:47:18.112523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.381 [2024-11-19 07:47:18.112539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
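The `format_interchange_psk` step earlier in the log turns the configured hex string `00112233445566778899aabbccddeeff0011223344556677` with digest id 2 into `NVMeTLSkey-1:02:MDAxMTIy...wWXNJw==:`. A sketch of that transformation, assuming the NVMe TLS PSK interchange layout (base64 of the configured key bytes followed by their CRC-32, wrapped in a versioned prefix and trailing colon); note the shell helper treats the configured key as the literal ASCII string, which the base64 in the log confirms:

```python
import base64
import zlib

def format_interchange_psk(configured_psk: str, hash_id: int) -> str:
    """Build an NVMeTLSkey-1 interchange string from a configured PSK.

    Assumptions (hedged): the CRC-32 is the standard zlib polynomial,
    appended little-endian after the PSK bytes before base64-encoding,
    and the key is used as its literal ASCII text, matching the base64
    payload visible in the log above.
    """
    data = configured_psk.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")  # 4-byte integrity tail
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"
```

The first 64 base64 characters depend only on the 48-byte key, so they must match the log's `MDAxMTIyMzM0NDU1...` regardless of the CRC convention; the final characters before `:` carry the checksum.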
00:23:26.381 [2024-11-19 07:47:18.114042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.948 07:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.948 07:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.948 07:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.948 07:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.948 07:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.948 07:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.948 07:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.DE2e5zWclL 00:23:26.948 07:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DE2e5zWclL 00:23:26.948 07:47:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:27.513 [2024-11-19 07:47:19.177400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.513 07:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:27.770 07:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:28.029 [2024-11-19 07:47:19.811410] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.029 [2024-11-19 07:47:19.811800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:28.029 07:47:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:28.287 malloc0 00:23:28.287 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:28.545 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DE2e5zWclL 00:23:28.802 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DE2e5zWclL 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DE2e5zWclL 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2996988 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.061 07:47:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2996988 /var/tmp/bdevperf.sock 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2996988 ']' 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.061 07:47:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.319 [2024-11-19 07:47:21.018862] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:29.319 [2024-11-19 07:47:21.019007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996988 ] 00:23:29.319 [2024-11-19 07:47:21.149774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.577 [2024-11-19 07:47:21.268853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.143 07:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.143 07:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:30.143 07:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DE2e5zWclL 00:23:30.401 07:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:30.659 [2024-11-19 07:47:22.484138] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.659 TLSTESTn1 00:23:30.659 07:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:30.918 Running I/O for 10 seconds... 
00:23:32.792 2524.00 IOPS, 9.86 MiB/s [2024-11-19T06:47:26.101Z] 2640.50 IOPS, 10.31 MiB/s [2024-11-19T06:47:27.037Z] 2672.00 IOPS, 10.44 MiB/s [2024-11-19T06:47:27.974Z] 2688.50 IOPS, 10.50 MiB/s [2024-11-19T06:47:28.914Z] 2700.00 IOPS, 10.55 MiB/s [2024-11-19T06:47:29.848Z] 2707.17 IOPS, 10.57 MiB/s [2024-11-19T06:47:30.788Z] 2709.57 IOPS, 10.58 MiB/s [2024-11-19T06:47:31.727Z] 2708.38 IOPS, 10.58 MiB/s [2024-11-19T06:47:33.107Z] 2714.56 IOPS, 10.60 MiB/s [2024-11-19T06:47:33.107Z] 2716.80 IOPS, 10.61 MiB/s 00:23:41.177 Latency(us) 00:23:41.177 [2024-11-19T06:47:33.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.177 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:41.177 Verification LBA range: start 0x0 length 0x2000 00:23:41.177 TLSTESTn1 : 10.03 2721.95 10.63 0.00 0.00 46932.73 8252.68 52040.44 00:23:41.177 [2024-11-19T06:47:33.107Z] =================================================================================================================== 00:23:41.177 [2024-11-19T06:47:33.107Z] Total : 2721.95 10.63 0.00 0.00 46932.73 8252.68 52040.44 00:23:41.177 { 00:23:41.177 "results": [ 00:23:41.177 { 00:23:41.177 "job": "TLSTESTn1", 00:23:41.177 "core_mask": "0x4", 00:23:41.177 "workload": "verify", 00:23:41.177 "status": "finished", 00:23:41.177 "verify_range": { 00:23:41.177 "start": 0, 00:23:41.177 "length": 8192 00:23:41.177 }, 00:23:41.177 "queue_depth": 128, 00:23:41.177 "io_size": 4096, 00:23:41.177 "runtime": 10.026989, 00:23:41.177 "iops": 2721.9537191075005, 00:23:41.177 "mibps": 10.632631715263674, 00:23:41.177 "io_failed": 0, 00:23:41.177 "io_timeout": 0, 00:23:41.177 "avg_latency_us": 46932.731059680205, 00:23:41.177 "min_latency_us": 8252.68148148148, 00:23:41.177 "max_latency_us": 52040.43851851852 00:23:41.177 } 00:23:41.177 ], 00:23:41.177 "core_count": 1 00:23:41.177 } 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2996988 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2996988 ']' 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2996988 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2996988 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2996988' 00:23:41.177 killing process with pid 2996988 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2996988 00:23:41.177 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.177 00:23:41.177 Latency(us) 00:23:41.177 [2024-11-19T06:47:33.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.177 [2024-11-19T06:47:33.107Z] =================================================================================================================== 00:23:41.177 [2024-11-19T06:47:33.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.177 07:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2996988 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.DE2e5zWclL 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DE2e5zWclL 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DE2e5zWclL 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DE2e5zWclL 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DE2e5zWclL 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2998444 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.743 
07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2998444 /var/tmp/bdevperf.sock 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998444 ']' 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.743 07:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.004 [2024-11-19 07:47:33.695313] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:23:42.004 [2024-11-19 07:47:33.695465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2998444 ] 00:23:42.004 [2024-11-19 07:47:33.845549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.263 [2024-11-19 07:47:33.970986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.829 07:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.829 07:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.829 07:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DE2e5zWclL 00:23:43.087 [2024-11-19 07:47:34.937318] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DE2e5zWclL': 0100666 00:23:43.087 [2024-11-19 07:47:34.937384] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:43.087 request: 00:23:43.087 { 00:23:43.087 "name": "key0", 00:23:43.087 "path": "/tmp/tmp.DE2e5zWclL", 00:23:43.087 "method": "keyring_file_add_key", 00:23:43.087 "req_id": 1 00:23:43.087 } 00:23:43.087 Got JSON-RPC error response 00:23:43.087 response: 00:23:43.087 { 00:23:43.087 "code": -1, 00:23:43.087 "message": "Operation not permitted" 00:23:43.087 } 00:23:43.087 07:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:43.346 [2024-11-19 07:47:35.194110] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.346 [2024-11-19 07:47:35.194187] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:43.346 request: 00:23:43.346 { 00:23:43.346 "name": "TLSTEST", 00:23:43.346 "trtype": "tcp", 00:23:43.346 "traddr": "10.0.0.2", 00:23:43.346 "adrfam": "ipv4", 00:23:43.346 "trsvcid": "4420", 00:23:43.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.346 "prchk_reftag": false, 00:23:43.346 "prchk_guard": false, 00:23:43.346 "hdgst": false, 00:23:43.346 "ddgst": false, 00:23:43.346 "psk": "key0", 00:23:43.346 "allow_unrecognized_csi": false, 00:23:43.346 "method": "bdev_nvme_attach_controller", 00:23:43.346 "req_id": 1 00:23:43.346 } 00:23:43.346 Got JSON-RPC error response 00:23:43.346 response: 00:23:43.346 { 00:23:43.346 "code": -126, 00:23:43.346 "message": "Required key not available" 00:23:43.346 } 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2998444 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998444 ']' 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998444 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998444 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2998444' 00:23:43.346 killing process with pid 2998444 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998444 00:23:43.346 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.346 00:23:43.346 Latency(us) 00:23:43.346 [2024-11-19T06:47:35.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.346 [2024-11-19T06:47:35.276Z] =================================================================================================================== 00:23:43.346 [2024-11-19T06:47:35.276Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:43.346 07:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998444 00:23:44.283 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:44.283 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2996574 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2996574 ']' 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2996574 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2996574 00:23:44.284 
07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2996574' 00:23:44.284 killing process with pid 2996574 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2996574 00:23:44.284 07:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2996574 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2998854 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2998854 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2998854 ']' 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:45.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.661 07:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.661 [2024-11-19 07:47:37.408875] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:45.661 [2024-11-19 07:47:37.409035] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.661 [2024-11-19 07:47:37.561413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.919 [2024-11-19 07:47:37.698380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.919 [2024-11-19 07:47:37.698483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.919 [2024-11-19 07:47:37.698509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.919 [2024-11-19 07:47:37.698535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.919 [2024-11-19 07:47:37.698556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
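The second bdevperf run above fails after `chmod 0666 /tmp/tmp.DE2e5zWclL`: keyring.c rejects the file with "Invalid permissions for key file ... 0100666", whereas the earlier `chmod 0600` run succeeded. A sketch of the implied rule, that a key file may not be group- or other-accessible (an assumption read off these two log outcomes, not SPDK source; `key_file_permissions_ok` is a hypothetical name):

```python
import os
import stat

def key_file_permissions_ok(path: str) -> bool:
    """Return True if a key file's mode is acceptable for the keyring.

    Assumed rule, inferred from the log: mode 0600 is accepted while
    0666 is rejected, so no group/other permission bits may be set.
    """
    mode = os.stat(path).st_mode
    # reject any group (0o070) or other (0o007) access bits
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0
```

This explains why the test deliberately loosens the mode to 0666 and then expects both `keyring_file_add_key` (-1, "Operation not permitted") and the dependent `bdev_nvme_attach_controller` (-126, "Required key not available") to fail.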
00:23:45.919 [2024-11-19 07:47:37.700200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.DE2e5zWclL 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.DE2e5zWclL 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.DE2e5zWclL 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DE2e5zWclL 00:23:46.486 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:47.052 [2024-11-19 07:47:38.687438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.052 07:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:47.310 07:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:47.569 [2024-11-19 07:47:39.325249] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.569 [2024-11-19 07:47:39.325620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.569 07:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:47.827 malloc0 00:23:47.827 07:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:48.084 07:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DE2e5zWclL 00:23:48.342 [2024-11-19 07:47:40.237229] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DE2e5zWclL': 0100666 00:23:48.342 [2024-11-19 07:47:40.237306] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:48.342 request: 00:23:48.342 { 00:23:48.342 "name": "key0", 00:23:48.342 "path": "/tmp/tmp.DE2e5zWclL", 00:23:48.342 "method": "keyring_file_add_key", 00:23:48.342 "req_id": 1 
00:23:48.342 } 00:23:48.342 Got JSON-RPC error response 00:23:48.342 response: 00:23:48.342 { 00:23:48.342 "code": -1, 00:23:48.342 "message": "Operation not permitted" 00:23:48.342 } 00:23:48.342 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:48.601 [2024-11-19 07:47:40.501974] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:48.601 [2024-11-19 07:47:40.502070] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:48.601 request: 00:23:48.601 { 00:23:48.601 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.601 "host": "nqn.2016-06.io.spdk:host1", 00:23:48.601 "psk": "key0", 00:23:48.601 "method": "nvmf_subsystem_add_host", 00:23:48.601 "req_id": 1 00:23:48.601 } 00:23:48.601 Got JSON-RPC error response 00:23:48.601 response: 00:23:48.601 { 00:23:48.601 "code": -32603, 00:23:48.601 "message": "Internal error" 00:23:48.601 } 00:23:48.601 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:48.601 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:48.601 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:48.601 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:48.601 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2998854 00:23:48.601 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2998854 ']' 00:23:48.601 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2998854 00:23:48.601 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:48.601 07:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.601 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998854 00:23:48.860 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:48.860 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:48.860 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998854' 00:23:48.860 killing process with pid 2998854 00:23:48.860 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2998854 00:23:48.860 07:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2998854 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.DE2e5zWclL 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2999420 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2999420 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999420 ']' 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.240 07:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.240 [2024-11-19 07:47:41.890428] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:50.240 [2024-11-19 07:47:41.890558] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.240 [2024-11-19 07:47:42.033239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.240 [2024-11-19 07:47:42.152129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.240 [2024-11-19 07:47:42.152212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.240 [2024-11-19 07:47:42.152234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.240 [2024-11-19 07:47:42.152255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.240 [2024-11-19 07:47:42.152271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
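The xtrace output above (and the repeat of the same sequence below) exercises SPDK's NVMe/TCP TLS target setup via `rpc.py`. The following is a paraphrased sketch of that sequence, not a verbatim excerpt of `tls.sh`: the RPC script path, NQNs, and listener address are taken from the log, while the PSK file path and permissions step are illustrative (the log's first `keyring_file_add_key` attempt fails with "Invalid permissions for key file ... 0100666" until `chmod 0600` is applied). Note the log itself marks TLS support as experimental in this SPDK build.

```shell
# Hypothetical PSK file path; the log uses a mktemp-generated name.
KEY=/tmp/psk.key
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# keyring_file_add_key rejects group/world-readable key files,
# which is what produced the 0100666 error earlier in this log.
chmod 0600 "$KEY"

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k enables the (experimental) TLS-secured listener.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 "$KEY"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

With the key permissions fixed, `nvmf_subsystem_add_host --psk key0` succeeds, which is why the second pass through this sequence (after the `chmod 0600` step logged below) completes without the JSON-RPC errors seen on the first pass.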
00:23:50.240 [2024-11-19 07:47:42.153884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.176 07:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.176 07:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:51.176 07:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.176 07:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.176 07:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.176 07:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.176 07:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.DE2e5zWclL 00:23:51.176 07:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DE2e5zWclL 00:23:51.176 07:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:51.434 [2024-11-19 07:47:43.274754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.434 07:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:51.692 07:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:51.953 [2024-11-19 07:47:43.832324] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.953 [2024-11-19 07:47:43.832630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:51.953 07:47:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:52.250 malloc0 00:23:52.250 07:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:52.534 07:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DE2e5zWclL 00:23:53.101 07:47:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2999838 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2999838 /var/tmp/bdevperf.sock 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2999838 ']' 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:53.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.360 07:47:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.360 [2024-11-19 07:47:45.177056] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:53.360 [2024-11-19 07:47:45.177200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999838 ] 00:23:53.620 [2024-11-19 07:47:45.311555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.620 [2024-11-19 07:47:45.434339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.557 07:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.557 07:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.557 07:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DE2e5zWclL 00:23:54.557 07:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.817 [2024-11-19 07:47:46.740216] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.075 TLSTESTn1 00:23:55.075 07:47:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:55.333 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:55.333 "subsystems": [ 00:23:55.333 { 00:23:55.333 "subsystem": "keyring", 00:23:55.333 "config": [ 00:23:55.333 { 00:23:55.333 "method": "keyring_file_add_key", 00:23:55.333 "params": { 00:23:55.333 "name": "key0", 00:23:55.333 "path": "/tmp/tmp.DE2e5zWclL" 00:23:55.333 } 00:23:55.333 } 00:23:55.333 ] 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "subsystem": "iobuf", 00:23:55.333 "config": [ 00:23:55.333 { 00:23:55.333 "method": "iobuf_set_options", 00:23:55.333 "params": { 00:23:55.333 "small_pool_count": 8192, 00:23:55.333 "large_pool_count": 1024, 00:23:55.333 "small_bufsize": 8192, 00:23:55.333 "large_bufsize": 135168, 00:23:55.333 "enable_numa": false 00:23:55.333 } 00:23:55.333 } 00:23:55.333 ] 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "subsystem": "sock", 00:23:55.333 "config": [ 00:23:55.333 { 00:23:55.333 "method": "sock_set_default_impl", 00:23:55.333 "params": { 00:23:55.333 "impl_name": "posix" 00:23:55.333 } 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "method": "sock_impl_set_options", 00:23:55.333 "params": { 00:23:55.333 "impl_name": "ssl", 00:23:55.333 "recv_buf_size": 4096, 00:23:55.333 "send_buf_size": 4096, 00:23:55.333 "enable_recv_pipe": true, 00:23:55.333 "enable_quickack": false, 00:23:55.333 "enable_placement_id": 0, 00:23:55.333 "enable_zerocopy_send_server": true, 00:23:55.333 "enable_zerocopy_send_client": false, 00:23:55.333 "zerocopy_threshold": 0, 00:23:55.333 "tls_version": 0, 00:23:55.333 "enable_ktls": false 00:23:55.333 } 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "method": "sock_impl_set_options", 00:23:55.333 "params": { 00:23:55.333 "impl_name": "posix", 00:23:55.333 "recv_buf_size": 2097152, 00:23:55.333 "send_buf_size": 2097152, 00:23:55.333 "enable_recv_pipe": true, 00:23:55.333 "enable_quickack": false, 00:23:55.333 "enable_placement_id": 0, 
00:23:55.333 "enable_zerocopy_send_server": true, 00:23:55.333 "enable_zerocopy_send_client": false, 00:23:55.333 "zerocopy_threshold": 0, 00:23:55.333 "tls_version": 0, 00:23:55.333 "enable_ktls": false 00:23:55.333 } 00:23:55.333 } 00:23:55.333 ] 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "subsystem": "vmd", 00:23:55.333 "config": [] 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "subsystem": "accel", 00:23:55.333 "config": [ 00:23:55.333 { 00:23:55.333 "method": "accel_set_options", 00:23:55.333 "params": { 00:23:55.333 "small_cache_size": 128, 00:23:55.333 "large_cache_size": 16, 00:23:55.333 "task_count": 2048, 00:23:55.333 "sequence_count": 2048, 00:23:55.333 "buf_count": 2048 00:23:55.333 } 00:23:55.333 } 00:23:55.333 ] 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "subsystem": "bdev", 00:23:55.333 "config": [ 00:23:55.333 { 00:23:55.333 "method": "bdev_set_options", 00:23:55.333 "params": { 00:23:55.333 "bdev_io_pool_size": 65535, 00:23:55.333 "bdev_io_cache_size": 256, 00:23:55.333 "bdev_auto_examine": true, 00:23:55.333 "iobuf_small_cache_size": 128, 00:23:55.333 "iobuf_large_cache_size": 16 00:23:55.333 } 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "method": "bdev_raid_set_options", 00:23:55.333 "params": { 00:23:55.333 "process_window_size_kb": 1024, 00:23:55.333 "process_max_bandwidth_mb_sec": 0 00:23:55.333 } 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "method": "bdev_iscsi_set_options", 00:23:55.333 "params": { 00:23:55.333 "timeout_sec": 30 00:23:55.333 } 00:23:55.333 }, 00:23:55.333 { 00:23:55.333 "method": "bdev_nvme_set_options", 00:23:55.333 "params": { 00:23:55.333 "action_on_timeout": "none", 00:23:55.333 "timeout_us": 0, 00:23:55.333 "timeout_admin_us": 0, 00:23:55.333 "keep_alive_timeout_ms": 10000, 00:23:55.333 "arbitration_burst": 0, 00:23:55.333 "low_priority_weight": 0, 00:23:55.334 "medium_priority_weight": 0, 00:23:55.334 "high_priority_weight": 0, 00:23:55.334 "nvme_adminq_poll_period_us": 10000, 00:23:55.334 "nvme_ioq_poll_period_us": 0, 
00:23:55.334 "io_queue_requests": 0, 00:23:55.334 "delay_cmd_submit": true, 00:23:55.334 "transport_retry_count": 4, 00:23:55.334 "bdev_retry_count": 3, 00:23:55.334 "transport_ack_timeout": 0, 00:23:55.334 "ctrlr_loss_timeout_sec": 0, 00:23:55.334 "reconnect_delay_sec": 0, 00:23:55.334 "fast_io_fail_timeout_sec": 0, 00:23:55.334 "disable_auto_failback": false, 00:23:55.334 "generate_uuids": false, 00:23:55.334 "transport_tos": 0, 00:23:55.334 "nvme_error_stat": false, 00:23:55.334 "rdma_srq_size": 0, 00:23:55.334 "io_path_stat": false, 00:23:55.334 "allow_accel_sequence": false, 00:23:55.334 "rdma_max_cq_size": 0, 00:23:55.334 "rdma_cm_event_timeout_ms": 0, 00:23:55.334 "dhchap_digests": [ 00:23:55.334 "sha256", 00:23:55.334 "sha384", 00:23:55.334 "sha512" 00:23:55.334 ], 00:23:55.334 "dhchap_dhgroups": [ 00:23:55.334 "null", 00:23:55.334 "ffdhe2048", 00:23:55.334 "ffdhe3072", 00:23:55.334 "ffdhe4096", 00:23:55.334 "ffdhe6144", 00:23:55.334 "ffdhe8192" 00:23:55.334 ] 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "bdev_nvme_set_hotplug", 00:23:55.334 "params": { 00:23:55.334 "period_us": 100000, 00:23:55.334 "enable": false 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "bdev_malloc_create", 00:23:55.334 "params": { 00:23:55.334 "name": "malloc0", 00:23:55.334 "num_blocks": 8192, 00:23:55.334 "block_size": 4096, 00:23:55.334 "physical_block_size": 4096, 00:23:55.334 "uuid": "e2dda1ca-199e-41c0-9035-9eb36c701867", 00:23:55.334 "optimal_io_boundary": 0, 00:23:55.334 "md_size": 0, 00:23:55.334 "dif_type": 0, 00:23:55.334 "dif_is_head_of_md": false, 00:23:55.334 "dif_pi_format": 0 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "bdev_wait_for_examine" 00:23:55.334 } 00:23:55.334 ] 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "subsystem": "nbd", 00:23:55.334 "config": [] 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "subsystem": "scheduler", 00:23:55.334 "config": [ 00:23:55.334 { 00:23:55.334 "method": 
"framework_set_scheduler", 00:23:55.334 "params": { 00:23:55.334 "name": "static" 00:23:55.334 } 00:23:55.334 } 00:23:55.334 ] 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "subsystem": "nvmf", 00:23:55.334 "config": [ 00:23:55.334 { 00:23:55.334 "method": "nvmf_set_config", 00:23:55.334 "params": { 00:23:55.334 "discovery_filter": "match_any", 00:23:55.334 "admin_cmd_passthru": { 00:23:55.334 "identify_ctrlr": false 00:23:55.334 }, 00:23:55.334 "dhchap_digests": [ 00:23:55.334 "sha256", 00:23:55.334 "sha384", 00:23:55.334 "sha512" 00:23:55.334 ], 00:23:55.334 "dhchap_dhgroups": [ 00:23:55.334 "null", 00:23:55.334 "ffdhe2048", 00:23:55.334 "ffdhe3072", 00:23:55.334 "ffdhe4096", 00:23:55.334 "ffdhe6144", 00:23:55.334 "ffdhe8192" 00:23:55.334 ] 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "nvmf_set_max_subsystems", 00:23:55.334 "params": { 00:23:55.334 "max_subsystems": 1024 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "nvmf_set_crdt", 00:23:55.334 "params": { 00:23:55.334 "crdt1": 0, 00:23:55.334 "crdt2": 0, 00:23:55.334 "crdt3": 0 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "nvmf_create_transport", 00:23:55.334 "params": { 00:23:55.334 "trtype": "TCP", 00:23:55.334 "max_queue_depth": 128, 00:23:55.334 "max_io_qpairs_per_ctrlr": 127, 00:23:55.334 "in_capsule_data_size": 4096, 00:23:55.334 "max_io_size": 131072, 00:23:55.334 "io_unit_size": 131072, 00:23:55.334 "max_aq_depth": 128, 00:23:55.334 "num_shared_buffers": 511, 00:23:55.334 "buf_cache_size": 4294967295, 00:23:55.334 "dif_insert_or_strip": false, 00:23:55.334 "zcopy": false, 00:23:55.334 "c2h_success": false, 00:23:55.334 "sock_priority": 0, 00:23:55.334 "abort_timeout_sec": 1, 00:23:55.334 "ack_timeout": 0, 00:23:55.334 "data_wr_pool_size": 0 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "nvmf_create_subsystem", 00:23:55.334 "params": { 00:23:55.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.334 
"allow_any_host": false, 00:23:55.334 "serial_number": "SPDK00000000000001", 00:23:55.334 "model_number": "SPDK bdev Controller", 00:23:55.334 "max_namespaces": 10, 00:23:55.334 "min_cntlid": 1, 00:23:55.334 "max_cntlid": 65519, 00:23:55.334 "ana_reporting": false 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "nvmf_subsystem_add_host", 00:23:55.334 "params": { 00:23:55.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.334 "host": "nqn.2016-06.io.spdk:host1", 00:23:55.334 "psk": "key0" 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "nvmf_subsystem_add_ns", 00:23:55.334 "params": { 00:23:55.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.334 "namespace": { 00:23:55.334 "nsid": 1, 00:23:55.334 "bdev_name": "malloc0", 00:23:55.334 "nguid": "E2DDA1CA199E41C090359EB36C701867", 00:23:55.334 "uuid": "e2dda1ca-199e-41c0-9035-9eb36c701867", 00:23:55.334 "no_auto_visible": false 00:23:55.334 } 00:23:55.334 } 00:23:55.334 }, 00:23:55.334 { 00:23:55.334 "method": "nvmf_subsystem_add_listener", 00:23:55.334 "params": { 00:23:55.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.334 "listen_address": { 00:23:55.334 "trtype": "TCP", 00:23:55.334 "adrfam": "IPv4", 00:23:55.334 "traddr": "10.0.0.2", 00:23:55.334 "trsvcid": "4420" 00:23:55.334 }, 00:23:55.334 "secure_channel": true 00:23:55.334 } 00:23:55.334 } 00:23:55.334 ] 00:23:55.334 } 00:23:55.334 ] 00:23:55.334 }' 00:23:55.334 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:55.901 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:55.901 "subsystems": [ 00:23:55.901 { 00:23:55.901 "subsystem": "keyring", 00:23:55.901 "config": [ 00:23:55.901 { 00:23:55.901 "method": "keyring_file_add_key", 00:23:55.901 "params": { 00:23:55.901 "name": "key0", 00:23:55.901 "path": "/tmp/tmp.DE2e5zWclL" 00:23:55.901 } 
00:23:55.901 } 00:23:55.901 ] 00:23:55.901 }, 00:23:55.901 { 00:23:55.901 "subsystem": "iobuf", 00:23:55.901 "config": [ 00:23:55.901 { 00:23:55.901 "method": "iobuf_set_options", 00:23:55.901 "params": { 00:23:55.901 "small_pool_count": 8192, 00:23:55.901 "large_pool_count": 1024, 00:23:55.901 "small_bufsize": 8192, 00:23:55.901 "large_bufsize": 135168, 00:23:55.901 "enable_numa": false 00:23:55.901 } 00:23:55.901 } 00:23:55.901 ] 00:23:55.901 }, 00:23:55.901 { 00:23:55.901 "subsystem": "sock", 00:23:55.901 "config": [ 00:23:55.901 { 00:23:55.901 "method": "sock_set_default_impl", 00:23:55.901 "params": { 00:23:55.901 "impl_name": "posix" 00:23:55.901 } 00:23:55.901 }, 00:23:55.901 { 00:23:55.901 "method": "sock_impl_set_options", 00:23:55.901 "params": { 00:23:55.901 "impl_name": "ssl", 00:23:55.901 "recv_buf_size": 4096, 00:23:55.901 "send_buf_size": 4096, 00:23:55.901 "enable_recv_pipe": true, 00:23:55.901 "enable_quickack": false, 00:23:55.901 "enable_placement_id": 0, 00:23:55.901 "enable_zerocopy_send_server": true, 00:23:55.901 "enable_zerocopy_send_client": false, 00:23:55.901 "zerocopy_threshold": 0, 00:23:55.901 "tls_version": 0, 00:23:55.901 "enable_ktls": false 00:23:55.902 } 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "method": "sock_impl_set_options", 00:23:55.902 "params": { 00:23:55.902 "impl_name": "posix", 00:23:55.902 "recv_buf_size": 2097152, 00:23:55.902 "send_buf_size": 2097152, 00:23:55.902 "enable_recv_pipe": true, 00:23:55.902 "enable_quickack": false, 00:23:55.902 "enable_placement_id": 0, 00:23:55.902 "enable_zerocopy_send_server": true, 00:23:55.902 "enable_zerocopy_send_client": false, 00:23:55.902 "zerocopy_threshold": 0, 00:23:55.902 "tls_version": 0, 00:23:55.902 "enable_ktls": false 00:23:55.902 } 00:23:55.902 } 00:23:55.902 ] 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "subsystem": "vmd", 00:23:55.902 "config": [] 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "subsystem": "accel", 00:23:55.902 "config": [ 00:23:55.902 { 00:23:55.902 
"method": "accel_set_options", 00:23:55.902 "params": { 00:23:55.902 "small_cache_size": 128, 00:23:55.902 "large_cache_size": 16, 00:23:55.902 "task_count": 2048, 00:23:55.902 "sequence_count": 2048, 00:23:55.902 "buf_count": 2048 00:23:55.902 } 00:23:55.902 } 00:23:55.902 ] 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "subsystem": "bdev", 00:23:55.902 "config": [ 00:23:55.902 { 00:23:55.902 "method": "bdev_set_options", 00:23:55.902 "params": { 00:23:55.902 "bdev_io_pool_size": 65535, 00:23:55.902 "bdev_io_cache_size": 256, 00:23:55.902 "bdev_auto_examine": true, 00:23:55.902 "iobuf_small_cache_size": 128, 00:23:55.902 "iobuf_large_cache_size": 16 00:23:55.902 } 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "method": "bdev_raid_set_options", 00:23:55.902 "params": { 00:23:55.902 "process_window_size_kb": 1024, 00:23:55.902 "process_max_bandwidth_mb_sec": 0 00:23:55.902 } 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "method": "bdev_iscsi_set_options", 00:23:55.902 "params": { 00:23:55.902 "timeout_sec": 30 00:23:55.902 } 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "method": "bdev_nvme_set_options", 00:23:55.902 "params": { 00:23:55.902 "action_on_timeout": "none", 00:23:55.902 "timeout_us": 0, 00:23:55.902 "timeout_admin_us": 0, 00:23:55.902 "keep_alive_timeout_ms": 10000, 00:23:55.902 "arbitration_burst": 0, 00:23:55.902 "low_priority_weight": 0, 00:23:55.902 "medium_priority_weight": 0, 00:23:55.902 "high_priority_weight": 0, 00:23:55.902 "nvme_adminq_poll_period_us": 10000, 00:23:55.902 "nvme_ioq_poll_period_us": 0, 00:23:55.902 "io_queue_requests": 512, 00:23:55.902 "delay_cmd_submit": true, 00:23:55.902 "transport_retry_count": 4, 00:23:55.902 "bdev_retry_count": 3, 00:23:55.902 "transport_ack_timeout": 0, 00:23:55.902 "ctrlr_loss_timeout_sec": 0, 00:23:55.902 "reconnect_delay_sec": 0, 00:23:55.902 "fast_io_fail_timeout_sec": 0, 00:23:55.902 "disable_auto_failback": false, 00:23:55.902 "generate_uuids": false, 00:23:55.902 "transport_tos": 0, 00:23:55.902 
"nvme_error_stat": false, 00:23:55.902 "rdma_srq_size": 0, 00:23:55.902 "io_path_stat": false, 00:23:55.902 "allow_accel_sequence": false, 00:23:55.902 "rdma_max_cq_size": 0, 00:23:55.902 "rdma_cm_event_timeout_ms": 0, 00:23:55.902 "dhchap_digests": [ 00:23:55.902 "sha256", 00:23:55.902 "sha384", 00:23:55.902 "sha512" 00:23:55.902 ], 00:23:55.902 "dhchap_dhgroups": [ 00:23:55.902 "null", 00:23:55.902 "ffdhe2048", 00:23:55.902 "ffdhe3072", 00:23:55.902 "ffdhe4096", 00:23:55.902 "ffdhe6144", 00:23:55.902 "ffdhe8192" 00:23:55.902 ] 00:23:55.902 } 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "method": "bdev_nvme_attach_controller", 00:23:55.902 "params": { 00:23:55.902 "name": "TLSTEST", 00:23:55.902 "trtype": "TCP", 00:23:55.902 "adrfam": "IPv4", 00:23:55.902 "traddr": "10.0.0.2", 00:23:55.902 "trsvcid": "4420", 00:23:55.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.902 "prchk_reftag": false, 00:23:55.902 "prchk_guard": false, 00:23:55.902 "ctrlr_loss_timeout_sec": 0, 00:23:55.902 "reconnect_delay_sec": 0, 00:23:55.902 "fast_io_fail_timeout_sec": 0, 00:23:55.902 "psk": "key0", 00:23:55.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.902 "hdgst": false, 00:23:55.902 "ddgst": false, 00:23:55.902 "multipath": "multipath" 00:23:55.902 } 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "method": "bdev_nvme_set_hotplug", 00:23:55.902 "params": { 00:23:55.902 "period_us": 100000, 00:23:55.902 "enable": false 00:23:55.902 } 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "method": "bdev_wait_for_examine" 00:23:55.902 } 00:23:55.902 ] 00:23:55.902 }, 00:23:55.902 { 00:23:55.902 "subsystem": "nbd", 00:23:55.902 "config": [] 00:23:55.902 } 00:23:55.902 ] 00:23:55.902 }' 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2999838 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999838 ']' 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2999838 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999838 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999838' 00:23:55.902 killing process with pid 2999838 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999838 00:23:55.902 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.902 00:23:55.902 Latency(us) 00:23:55.902 [2024-11-19T06:47:47.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.902 [2024-11-19T06:47:47.832Z] =================================================================================================================== 00:23:55.902 [2024-11-19T06:47:47.832Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:55.902 07:47:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999838 00:23:56.473 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2999420 00:23:56.473 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2999420 ']' 00:23:56.473 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2999420 00:23:56.473 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.473 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.473 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999420 00:23:56.733 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.733 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.733 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999420' 00:23:56.733 killing process with pid 2999420 00:23:56.733 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2999420 00:23:56.733 07:47:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2999420 00:23:57.671 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:57.671 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.671 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:57.671 "subsystems": [ 00:23:57.671 { 00:23:57.671 "subsystem": "keyring", 00:23:57.671 "config": [ 00:23:57.671 { 00:23:57.671 "method": "keyring_file_add_key", 00:23:57.671 "params": { 00:23:57.671 "name": "key0", 00:23:57.671 "path": "/tmp/tmp.DE2e5zWclL" 00:23:57.671 } 00:23:57.671 } 00:23:57.671 ] 00:23:57.671 }, 00:23:57.671 { 00:23:57.671 "subsystem": "iobuf", 00:23:57.671 "config": [ 00:23:57.671 { 00:23:57.671 "method": "iobuf_set_options", 00:23:57.671 "params": { 00:23:57.671 "small_pool_count": 8192, 00:23:57.671 "large_pool_count": 1024, 00:23:57.671 "small_bufsize": 8192, 00:23:57.671 "large_bufsize": 135168, 00:23:57.671 "enable_numa": false 00:23:57.671 } 00:23:57.671 } 00:23:57.671 ] 00:23:57.671 }, 00:23:57.671 { 00:23:57.671 "subsystem": "sock", 00:23:57.671 "config": [ 00:23:57.671 { 00:23:57.671 "method": 
"sock_set_default_impl", 00:23:57.671 "params": { 00:23:57.671 "impl_name": "posix" 00:23:57.671 } 00:23:57.671 }, 00:23:57.671 { 00:23:57.671 "method": "sock_impl_set_options", 00:23:57.671 "params": { 00:23:57.671 "impl_name": "ssl", 00:23:57.671 "recv_buf_size": 4096, 00:23:57.671 "send_buf_size": 4096, 00:23:57.671 "enable_recv_pipe": true, 00:23:57.671 "enable_quickack": false, 00:23:57.671 "enable_placement_id": 0, 00:23:57.671 "enable_zerocopy_send_server": true, 00:23:57.671 "enable_zerocopy_send_client": false, 00:23:57.671 "zerocopy_threshold": 0, 00:23:57.671 "tls_version": 0, 00:23:57.671 "enable_ktls": false 00:23:57.671 } 00:23:57.671 }, 00:23:57.671 { 00:23:57.671 "method": "sock_impl_set_options", 00:23:57.671 "params": { 00:23:57.671 "impl_name": "posix", 00:23:57.671 "recv_buf_size": 2097152, 00:23:57.671 "send_buf_size": 2097152, 00:23:57.671 "enable_recv_pipe": true, 00:23:57.671 "enable_quickack": false, 00:23:57.671 "enable_placement_id": 0, 00:23:57.671 "enable_zerocopy_send_server": true, 00:23:57.671 "enable_zerocopy_send_client": false, 00:23:57.671 "zerocopy_threshold": 0, 00:23:57.671 "tls_version": 0, 00:23:57.671 "enable_ktls": false 00:23:57.671 } 00:23:57.671 } 00:23:57.671 ] 00:23:57.671 }, 00:23:57.671 { 00:23:57.671 "subsystem": "vmd", 00:23:57.671 "config": [] 00:23:57.671 }, 00:23:57.671 { 00:23:57.671 "subsystem": "accel", 00:23:57.671 "config": [ 00:23:57.671 { 00:23:57.671 "method": "accel_set_options", 00:23:57.671 "params": { 00:23:57.671 "small_cache_size": 128, 00:23:57.671 "large_cache_size": 16, 00:23:57.671 "task_count": 2048, 00:23:57.671 "sequence_count": 2048, 00:23:57.671 "buf_count": 2048 00:23:57.671 } 00:23:57.671 } 00:23:57.671 ] 00:23:57.671 }, 00:23:57.671 { 00:23:57.671 "subsystem": "bdev", 00:23:57.671 "config": [ 00:23:57.671 { 00:23:57.671 "method": "bdev_set_options", 00:23:57.671 "params": { 00:23:57.671 "bdev_io_pool_size": 65535, 00:23:57.671 "bdev_io_cache_size": 256, 00:23:57.671 
"bdev_auto_examine": true, 00:23:57.671 "iobuf_small_cache_size": 128, 00:23:57.672 "iobuf_large_cache_size": 16 00:23:57.672 } 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "method": "bdev_raid_set_options", 00:23:57.672 "params": { 00:23:57.672 "process_window_size_kb": 1024, 00:23:57.672 "process_max_bandwidth_mb_sec": 0 00:23:57.672 } 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "method": "bdev_iscsi_set_options", 00:23:57.672 "params": { 00:23:57.672 "timeout_sec": 30 00:23:57.672 } 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "method": "bdev_nvme_set_options", 00:23:57.672 "params": { 00:23:57.672 "action_on_timeout": "none", 00:23:57.672 "timeout_us": 0, 00:23:57.672 "timeout_admin_us": 0, 00:23:57.672 "keep_alive_timeout_ms": 10000, 00:23:57.672 "arbitration_burst": 0, 00:23:57.672 "low_priority_weight": 0, 00:23:57.672 "medium_priority_weight": 0, 00:23:57.672 "high_priority_weight": 0, 00:23:57.672 "nvme_adminq_poll_period_us": 10000, 00:23:57.672 "nvme_ioq_poll_period_us": 0, 00:23:57.672 "io_queue_requests": 0, 00:23:57.672 "delay_cmd_submit": true, 00:23:57.672 "transport_retry_count": 4, 00:23:57.672 "bdev_retry_count": 3, 00:23:57.672 "transport_ack_timeout": 0, 00:23:57.672 "ctrlr_loss_timeout_sec": 0, 00:23:57.672 "reconnect_delay_sec": 0, 00:23:57.672 "fast_io_fail_timeout_sec": 0, 00:23:57.672 "disable_auto_failback": false, 00:23:57.672 "generate_uuids": false, 00:23:57.672 "transport_tos": 0, 00:23:57.672 "nvme_error_stat": false, 00:23:57.672 "rdma_srq_size": 0, 00:23:57.672 "io_path_stat": false, 00:23:57.672 "allow_accel_sequence": false, 00:23:57.672 "rdma_max_cq_size": 0, 00:23:57.672 "rdma_cm_event_timeout_ms": 0, 00:23:57.672 "dhchap_digests": [ 00:23:57.672 "sha256", 00:23:57.672 "sha384", 00:23:57.672 "sha512" 00:23:57.672 ], 00:23:57.672 "dhchap_dhgroups": [ 00:23:57.672 "null", 00:23:57.672 "ffdhe2048", 00:23:57.672 "ffdhe3072", 00:23:57.672 "ffdhe4096", 00:23:57.672 "ffdhe6144", 00:23:57.672 "ffdhe8192" 00:23:57.672 ] 00:23:57.672 } 
00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "method": "bdev_nvme_set_hotplug", 00:23:57.672 "params": { 00:23:57.672 "period_us": 100000, 00:23:57.672 "enable": false 00:23:57.672 } 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "method": "bdev_malloc_create", 00:23:57.672 "params": { 00:23:57.672 "name": "malloc0", 00:23:57.672 "num_blocks": 8192, 00:23:57.672 "block_size": 4096, 00:23:57.672 "physical_block_size": 4096, 00:23:57.672 "uuid": "e2dda1ca-199e-41c0-9035-9eb36c701867", 00:23:57.672 "optimal_io_boundary": 0, 00:23:57.672 "md_size": 0, 00:23:57.672 "dif_type": 0, 00:23:57.672 "dif_is_head_of_md": false, 00:23:57.672 "dif_pi_format": 0 00:23:57.672 } 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "method": "bdev_wait_for_examine" 00:23:57.672 } 00:23:57.672 ] 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "subsystem": "nbd", 00:23:57.672 "config": [] 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "subsystem": "scheduler", 00:23:57.672 "config": [ 00:23:57.672 { 00:23:57.672 "method": "framework_set_scheduler", 00:23:57.672 "params": { 00:23:57.672 "name": "static" 00:23:57.672 } 00:23:57.672 } 00:23:57.672 ] 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "subsystem": "nvmf", 00:23:57.672 "config": [ 00:23:57.672 { 00:23:57.672 "method": "nvmf_set_config", 00:23:57.672 "params": { 00:23:57.672 "discovery_filter": "match_any", 00:23:57.672 "admin_cmd_passthru": { 00:23:57.672 "identify_ctrlr": false 00:23:57.672 }, 00:23:57.672 "dhchap_digests": [ 00:23:57.672 "sha256", 00:23:57.672 "sha384", 00:23:57.672 "sha512" 00:23:57.672 ], 00:23:57.672 "dhchap_dhgroups": [ 00:23:57.672 "null", 00:23:57.672 "ffdhe2048", 00:23:57.672 "ffdhe3072", 00:23:57.672 "ffdhe4096", 00:23:57.672 "ffdhe6144", 00:23:57.672 "ffdhe8192" 00:23:57.672 ] 00:23:57.672 } 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "method": "nvmf_set_max_subsystems", 00:23:57.672 "params": { 00:23:57.672 "max_subsystems": 1024 00:23:57.672 } 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "method": "nvmf_set_crdt", 
00:23:57.672 "params": { 00:23:57.672 "crdt1": 0, 00:23:57.672 "crdt2": 0, 00:23:57.672 "crdt3": 0 00:23:57.672 } 00:23:57.672 }, 00:23:57.672 { 00:23:57.672 "method": "nvmf_create_transport", 00:23:57.672 "params": { 00:23:57.672 "trtype": "TCP", 00:23:57.672 "max_queue_depth": 128, 00:23:57.672 "max_io_qpairs_per_ctrlr": 127, 00:23:57.672 "in_capsule_data_size": 4096, 00:23:57.673 "max_io_size": 131072, 00:23:57.673 "io_unit_size": 131072, 00:23:57.673 "max_aq_depth": 128, 00:23:57.673 "num_shared_buffers": 511, 00:23:57.673 "buf_cache_size": 4294967295, 00:23:57.673 "dif_insert_or_strip": false, 00:23:57.673 "zcopy": false, 00:23:57.673 "c2h_success": false, 00:23:57.673 "sock_priority": 0, 00:23:57.673 "abort_timeout_sec": 1, 00:23:57.673 "ack_timeout": 0, 00:23:57.673 "data_wr_pool_size": 0 00:23:57.673 } 00:23:57.673 }, 00:23:57.673 { 00:23:57.673 "method": "nvmf_create_subsystem", 00:23:57.673 "params": { 00:23:57.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.673 "allow_any_host": false, 00:23:57.673 "serial_number": "SPDK00000000000001", 00:23:57.673 "model_number": "SPDK bdev Controller", 00:23:57.673 "max_namespaces": 10, 00:23:57.673 "min_cntlid": 1, 00:23:57.673 "max_cntlid": 65519, 00:23:57.673 "ana_reporting": false 00:23:57.673 } 00:23:57.673 }, 00:23:57.673 { 00:23:57.673 "method": "nvmf_subsystem_add_host", 00:23:57.673 "params": { 00:23:57.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.673 "host": "nqn.2016-06.io.spdk:host1", 00:23:57.673 "psk": "key0" 00:23:57.673 } 00:23:57.673 }, 00:23:57.673 { 00:23:57.673 "method": "nvmf_subsystem_add_ns", 00:23:57.673 "params": { 00:23:57.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.673 "namespace": { 00:23:57.673 "nsid": 1, 00:23:57.673 "bdev_name": "malloc0", 00:23:57.673 "nguid": "E2DDA1CA199E41C090359EB36C701867", 00:23:57.673 "uuid": "e2dda1ca-199e-41c0-9035-9eb36c701867", 00:23:57.673 "no_auto_visible": false 00:23:57.673 } 00:23:57.673 } 00:23:57.673 }, 00:23:57.673 { 00:23:57.673 
"method": "nvmf_subsystem_add_listener", 00:23:57.673 "params": { 00:23:57.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.673 "listen_address": { 00:23:57.673 "trtype": "TCP", 00:23:57.673 "adrfam": "IPv4", 00:23:57.673 "traddr": "10.0.0.2", 00:23:57.673 "trsvcid": "4420" 00:23:57.673 }, 00:23:57.673 "secure_channel": true 00:23:57.673 } 00:23:57.673 } 00:23:57.673 ] 00:23:57.673 } 00:23:57.673 ] 00:23:57.673 }' 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3000381 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3000381 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000381 ']' 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.673 07:47:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.931 [2024-11-19 07:47:49.668685] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:57.931 [2024-11-19 07:47:49.668841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.931 [2024-11-19 07:47:49.819602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.191 [2024-11-19 07:47:49.956234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.191 [2024-11-19 07:47:49.956334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.191 [2024-11-19 07:47:49.956360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.191 [2024-11-19 07:47:49.956384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.191 [2024-11-19 07:47:49.956404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:58.191 [2024-11-19 07:47:49.958147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.760 [2024-11-19 07:47:50.513386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.760 [2024-11-19 07:47:50.545426] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.760 [2024-11-19 07:47:50.545795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.021 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.021 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:59.021 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.021 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.021 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3000537 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3000537 /var/tmp/bdevperf.sock 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3000537 ']' 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
-c /dev/fd/63 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.022 07:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:59.022 "subsystems": [ 00:23:59.022 { 00:23:59.022 "subsystem": "keyring", 00:23:59.022 "config": [ 00:23:59.022 { 00:23:59.022 "method": "keyring_file_add_key", 00:23:59.022 "params": { 00:23:59.022 "name": "key0", 00:23:59.022 "path": "/tmp/tmp.DE2e5zWclL" 00:23:59.022 } 00:23:59.022 } 00:23:59.022 ] 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "subsystem": "iobuf", 00:23:59.022 "config": [ 00:23:59.022 { 00:23:59.022 "method": "iobuf_set_options", 00:23:59.022 "params": { 00:23:59.022 "small_pool_count": 8192, 00:23:59.022 "large_pool_count": 1024, 00:23:59.022 "small_bufsize": 8192, 00:23:59.022 "large_bufsize": 135168, 00:23:59.022 "enable_numa": false 00:23:59.022 } 00:23:59.022 } 00:23:59.022 ] 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "subsystem": "sock", 00:23:59.022 "config": [ 00:23:59.022 { 00:23:59.022 "method": "sock_set_default_impl", 00:23:59.022 "params": { 00:23:59.022 "impl_name": "posix" 00:23:59.022 } 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "method": "sock_impl_set_options", 00:23:59.022 "params": { 00:23:59.022 "impl_name": "ssl", 00:23:59.022 "recv_buf_size": 4096, 00:23:59.022 "send_buf_size": 4096, 00:23:59.022 "enable_recv_pipe": true, 00:23:59.022 "enable_quickack": false, 00:23:59.022 "enable_placement_id": 0, 00:23:59.022 "enable_zerocopy_send_server": true, 00:23:59.022 "enable_zerocopy_send_client": false, 00:23:59.022 
"zerocopy_threshold": 0, 00:23:59.022 "tls_version": 0, 00:23:59.022 "enable_ktls": false 00:23:59.022 } 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "method": "sock_impl_set_options", 00:23:59.022 "params": { 00:23:59.022 "impl_name": "posix", 00:23:59.022 "recv_buf_size": 2097152, 00:23:59.022 "send_buf_size": 2097152, 00:23:59.022 "enable_recv_pipe": true, 00:23:59.022 "enable_quickack": false, 00:23:59.022 "enable_placement_id": 0, 00:23:59.022 "enable_zerocopy_send_server": true, 00:23:59.022 "enable_zerocopy_send_client": false, 00:23:59.022 "zerocopy_threshold": 0, 00:23:59.022 "tls_version": 0, 00:23:59.022 "enable_ktls": false 00:23:59.022 } 00:23:59.022 } 00:23:59.022 ] 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "subsystem": "vmd", 00:23:59.022 "config": [] 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "subsystem": "accel", 00:23:59.022 "config": [ 00:23:59.022 { 00:23:59.022 "method": "accel_set_options", 00:23:59.022 "params": { 00:23:59.022 "small_cache_size": 128, 00:23:59.022 "large_cache_size": 16, 00:23:59.022 "task_count": 2048, 00:23:59.022 "sequence_count": 2048, 00:23:59.022 "buf_count": 2048 00:23:59.022 } 00:23:59.022 } 00:23:59.022 ] 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "subsystem": "bdev", 00:23:59.022 "config": [ 00:23:59.022 { 00:23:59.022 "method": "bdev_set_options", 00:23:59.022 "params": { 00:23:59.022 "bdev_io_pool_size": 65535, 00:23:59.022 "bdev_io_cache_size": 256, 00:23:59.022 "bdev_auto_examine": true, 00:23:59.022 "iobuf_small_cache_size": 128, 00:23:59.022 "iobuf_large_cache_size": 16 00:23:59.022 } 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "method": "bdev_raid_set_options", 00:23:59.022 "params": { 00:23:59.022 "process_window_size_kb": 1024, 00:23:59.022 "process_max_bandwidth_mb_sec": 0 00:23:59.022 } 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "method": "bdev_iscsi_set_options", 00:23:59.022 "params": { 00:23:59.022 "timeout_sec": 30 00:23:59.022 } 00:23:59.022 }, 00:23:59.022 { 00:23:59.022 "method": 
"bdev_nvme_set_options", 00:23:59.022 "params": { 00:23:59.022 "action_on_timeout": "none", 00:23:59.022 "timeout_us": 0, 00:23:59.022 "timeout_admin_us": 0, 00:23:59.022 "keep_alive_timeout_ms": 10000, 00:23:59.022 "arbitration_burst": 0, 00:23:59.022 "low_priority_weight": 0, 00:23:59.022 "medium_priority_weight": 0, 00:23:59.022 "high_priority_weight": 0, 00:23:59.022 "nvme_adminq_poll_period_us": 10000, 00:23:59.022 "nvme_ioq_poll_period_us": 0, 00:23:59.022 "io_queue_requests": 512, 00:23:59.022 "delay_cmd_submit": true, 00:23:59.022 "transport_retry_count": 4, 00:23:59.022 "bdev_retry_count": 3, 00:23:59.022 "transport_ack_timeout": 0, 00:23:59.022 "ctrlr_loss_timeout_sec": 0, 00:23:59.022 "reconnect_delay_sec": 0, 00:23:59.022 "fast_io_fail_timeout_sec": 0, 00:23:59.022 "disable_auto_failback": false, 00:23:59.022 "generate_uuids": false, 00:23:59.022 "transport_tos": 0, 00:23:59.022 "nvme_error_stat": false, 00:23:59.022 "rdma_srq_size": 0, 00:23:59.022 "io_path_stat": false, 00:23:59.022 "allow_accel_sequence": false, 00:23:59.022 "rdma_max_cq_size": 0, 00:23:59.022 "rdma_cm_event_timeout_ms": 0, 00:23:59.022 "dhchap_digests": [ 00:23:59.022 "sha256", 00:23:59.022 "sha384", 00:23:59.022 "sha512" 00:23:59.022 ], 00:23:59.022 "dhchap_dhgroups": [ 00:23:59.022 "null", 00:23:59.022 "ffdhe2048", 00:23:59.022 "ffdhe3072", 00:23:59.022 "ffdhe4096", 00:23:59.023 "ffdhe6144", 00:23:59.023 "ffdhe8192" 00:23:59.023 ] 00:23:59.023 } 00:23:59.023 }, 00:23:59.023 { 00:23:59.023 "method": "bdev_nvme_attach_controller", 00:23:59.023 "params": { 00:23:59.023 "name": "TLSTEST", 00:23:59.023 "trtype": "TCP", 00:23:59.023 "adrfam": "IPv4", 00:23:59.023 "traddr": "10.0.0.2", 00:23:59.023 "trsvcid": "4420", 00:23:59.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.023 "prchk_reftag": false, 00:23:59.023 "prchk_guard": false, 00:23:59.023 "ctrlr_loss_timeout_sec": 0, 00:23:59.023 "reconnect_delay_sec": 0, 00:23:59.023 "fast_io_fail_timeout_sec": 0, 00:23:59.023 "psk": 
"key0", 00:23:59.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.023 "hdgst": false, 00:23:59.023 "ddgst": false, 00:23:59.023 "multipath": "multipath" 00:23:59.023 } 00:23:59.023 }, 00:23:59.023 { 00:23:59.023 "method": "bdev_nvme_set_hotplug", 00:23:59.023 "params": { 00:23:59.023 "period_us": 100000, 00:23:59.023 "enable": false 00:23:59.023 } 00:23:59.023 }, 00:23:59.023 { 00:23:59.023 "method": "bdev_wait_for_examine" 00:23:59.023 } 00:23:59.023 ] 00:23:59.023 }, 00:23:59.023 { 00:23:59.023 "subsystem": "nbd", 00:23:59.023 "config": [] 00:23:59.023 } 00:23:59.023 ] 00:23:59.023 }' 00:23:59.023 [2024-11-19 07:47:50.828859] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:59.023 [2024-11-19 07:47:50.828987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000537 ] 00:23:59.283 [2024-11-19 07:47:50.963505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.283 [2024-11-19 07:47:51.091155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.852 [2024-11-19 07:47:51.504627] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.113 07:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.113 07:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.113 07:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:00.113 Running I/O for 10 seconds... 
00:24:02.429 2155.00 IOPS, 8.42 MiB/s [2024-11-19T06:47:55.298Z] 2256.50 IOPS, 8.81 MiB/s [2024-11-19T06:47:56.237Z] 2262.67 IOPS, 8.84 MiB/s [2024-11-19T06:47:57.171Z] 2254.75 IOPS, 8.81 MiB/s [2024-11-19T06:47:58.105Z] 2264.60 IOPS, 8.85 MiB/s [2024-11-19T06:47:59.040Z] 2271.83 IOPS, 8.87 MiB/s [2024-11-19T06:47:59.972Z] 2267.14 IOPS, 8.86 MiB/s [2024-11-19T06:48:01.346Z] 2272.75 IOPS, 8.88 MiB/s [2024-11-19T06:48:02.280Z] 2268.33 IOPS, 8.86 MiB/s [2024-11-19T06:48:02.280Z] 2265.90 IOPS, 8.85 MiB/s 00:24:10.350 Latency(us) 00:24:10.350 [2024-11-19T06:48:02.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.350 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:10.350 Verification LBA range: start 0x0 length 0x2000 00:24:10.350 TLSTESTn1 : 10.05 2267.38 8.86 0.00 0.00 56295.17 7573.05 43302.31 00:24:10.350 [2024-11-19T06:48:02.280Z] =================================================================================================================== 00:24:10.350 [2024-11-19T06:48:02.280Z] Total : 2267.38 8.86 0.00 0.00 56295.17 7573.05 43302.31 00:24:10.350 { 00:24:10.350 "results": [ 00:24:10.350 { 00:24:10.350 "job": "TLSTESTn1", 00:24:10.350 "core_mask": "0x4", 00:24:10.350 "workload": "verify", 00:24:10.350 "status": "finished", 00:24:10.350 "verify_range": { 00:24:10.350 "start": 0, 00:24:10.350 "length": 8192 00:24:10.350 }, 00:24:10.350 "queue_depth": 128, 00:24:10.350 "io_size": 4096, 00:24:10.350 "runtime": 10.049918, 00:24:10.350 "iops": 2267.3816841092635, 00:24:10.350 "mibps": 8.85695970355181, 00:24:10.350 "io_failed": 0, 00:24:10.350 "io_timeout": 0, 00:24:10.350 "avg_latency_us": 56295.17037713185, 00:24:10.350 "min_latency_us": 7573.0488888888885, 00:24:10.351 "max_latency_us": 43302.305185185185 00:24:10.351 } 00:24:10.351 ], 00:24:10.351 "core_count": 1 00:24:10.351 } 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3000537 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3000537 ']' 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000537 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000537 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000537' 00:24:10.351 killing process with pid 3000537 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000537 00:24:10.351 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.351 00:24:10.351 Latency(us) 00:24:10.351 [2024-11-19T06:48:02.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.351 [2024-11-19T06:48:02.281Z] =================================================================================================================== 00:24:10.351 [2024-11-19T06:48:02.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.351 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000537 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3000381 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
'[' -z 3000381 ']' 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3000381 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000381 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000381' 00:24:11.285 killing process with pid 3000381 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3000381 00:24:11.285 07:48:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3000381 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3002116 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3002116 00:24:12.662 07:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002116 ']' 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.662 07:48:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.662 [2024-11-19 07:48:04.251141] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:12.662 [2024-11-19 07:48:04.251287] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.662 [2024-11-19 07:48:04.398278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.662 [2024-11-19 07:48:04.518058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.662 [2024-11-19 07:48:04.518126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.662 [2024-11-19 07:48:04.518146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.662 [2024-11-19 07:48:04.518166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:12.662 [2024-11-19 07:48:04.518182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.662 [2024-11-19 07:48:04.519567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.596 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.596 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:13.596 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.596 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.596 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.596 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.597 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.DE2e5zWclL 00:24:13.597 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DE2e5zWclL 00:24:13.597 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:13.855 [2024-11-19 07:48:05.597016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.855 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:14.112 07:48:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:14.370 [2024-11-19 07:48:06.202654] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:14.370 [2024-11-19 07:48:06.203024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.370 07:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:14.937 malloc0 00:24:14.937 07:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:14.937 07:48:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DE2e5zWclL 00:24:15.195 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:15.761 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3002687 00:24:15.761 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:15.762 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:15.762 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3002687 /var/tmp/bdevperf.sock 00:24:15.762 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3002687 ']' 00:24:15.762 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.762 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.762 
07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.762 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.762 07:48:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.762 [2024-11-19 07:48:07.530913] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:15.762 [2024-11-19 07:48:07.531095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3002687 ] 00:24:15.762 [2024-11-19 07:48:07.669406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.020 [2024-11-19 07:48:07.797883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.955 07:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.955 07:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:16.955 07:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DE2e5zWclL 00:24:16.955 07:48:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:17.213 [2024-11-19 07:48:09.079105] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:24:17.470 nvme0n1 00:24:17.470 07:48:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:17.470 Running I/O for 1 seconds... 00:24:18.431 2632.00 IOPS, 10.28 MiB/s 00:24:18.431 Latency(us) 00:24:18.431 [2024-11-19T06:48:10.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.431 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:18.431 Verification LBA range: start 0x0 length 0x2000 00:24:18.431 nvme0n1 : 1.03 2690.22 10.51 0.00 0.00 47007.78 8349.77 40777.96 00:24:18.431 [2024-11-19T06:48:10.361Z] =================================================================================================================== 00:24:18.431 [2024-11-19T06:48:10.361Z] Total : 2690.22 10.51 0.00 0.00 47007.78 8349.77 40777.96 00:24:18.431 { 00:24:18.431 "results": [ 00:24:18.431 { 00:24:18.431 "job": "nvme0n1", 00:24:18.431 "core_mask": "0x2", 00:24:18.431 "workload": "verify", 00:24:18.431 "status": "finished", 00:24:18.431 "verify_range": { 00:24:18.431 "start": 0, 00:24:18.431 "length": 8192 00:24:18.431 }, 00:24:18.431 "queue_depth": 128, 00:24:18.431 "io_size": 4096, 00:24:18.431 "runtime": 1.025938, 00:24:18.431 "iops": 2690.221046495987, 00:24:18.431 "mibps": 10.50867596287495, 00:24:18.431 "io_failed": 0, 00:24:18.431 "io_timeout": 0, 00:24:18.431 "avg_latency_us": 47007.78027697262, 00:24:18.431 "min_latency_us": 8349.771851851852, 00:24:18.431 "max_latency_us": 40777.955555555556 00:24:18.431 } 00:24:18.431 ], 00:24:18.431 "core_count": 1 00:24:18.431 } 00:24:18.431 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3002687 00:24:18.431 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002687 ']' 00:24:18.431 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3002687 00:24:18.431 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.431 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.431 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002687 00:24:18.713 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:18.713 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:18.713 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002687' 00:24:18.713 killing process with pid 3002687 00:24:18.713 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002687 00:24:18.713 Received shutdown signal, test time was about 1.000000 seconds 00:24:18.713 00:24:18.713 Latency(us) 00:24:18.713 [2024-11-19T06:48:10.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.713 [2024-11-19T06:48:10.643Z] =================================================================================================================== 00:24:18.713 [2024-11-19T06:48:10.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.713 07:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002687 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3002116 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3002116 ']' 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3002116 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002116 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002116' 00:24:19.652 killing process with pid 3002116 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3002116 00:24:19.652 07:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3002116 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3003705 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3003705 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003705 ']' 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.026 07:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.026 [2024-11-19 07:48:12.677977] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:21.026 [2024-11-19 07:48:12.678155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.026 [2024-11-19 07:48:12.829002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.284 [2024-11-19 07:48:12.965623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.285 [2024-11-19 07:48:12.965728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.285 [2024-11-19 07:48:12.965755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.285 [2024-11-19 07:48:12.965780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.285 [2024-11-19 07:48:12.965798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:21.285 [2024-11-19 07:48:12.967533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.851 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.851 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:21.851 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.851 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.851 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.851 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.851 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:21.851 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.851 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.851 [2024-11-19 07:48:13.709649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.851 malloc0 00:24:21.851 [2024-11-19 07:48:13.767247] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:21.851 [2024-11-19 07:48:13.767570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3003858 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3003858 /var/tmp/bdevperf.sock 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3003858 ']' 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.110 07:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.110 [2024-11-19 07:48:13.875456] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:22.110 [2024-11-19 07:48:13.875599] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003858 ] 00:24:22.110 [2024-11-19 07:48:14.011808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.369 [2024-11-19 07:48:14.138171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.304 07:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.304 07:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.304 07:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DE2e5zWclL 00:24:23.304 07:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:23.870 [2024-11-19 07:48:15.501852] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.870 nvme0n1 00:24:23.870 07:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.870 Running I/O for 1 seconds... 
00:24:25.062 2323.00 IOPS, 9.07 MiB/s 00:24:25.062 Latency(us) 00:24:25.062 [2024-11-19T06:48:16.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.063 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:25.063 Verification LBA range: start 0x0 length 0x2000 00:24:25.063 nvme0n1 : 1.03 2378.04 9.29 0.00 0.00 53144.98 10922.67 53982.25 00:24:25.063 [2024-11-19T06:48:16.993Z] =================================================================================================================== 00:24:25.063 [2024-11-19T06:48:16.993Z] Total : 2378.04 9.29 0.00 0.00 53144.98 10922.67 53982.25 00:24:25.063 { 00:24:25.063 "results": [ 00:24:25.063 { 00:24:25.063 "job": "nvme0n1", 00:24:25.063 "core_mask": "0x2", 00:24:25.063 "workload": "verify", 00:24:25.063 "status": "finished", 00:24:25.063 "verify_range": { 00:24:25.063 "start": 0, 00:24:25.063 "length": 8192 00:24:25.063 }, 00:24:25.063 "queue_depth": 128, 00:24:25.063 "io_size": 4096, 00:24:25.063 "runtime": 1.030682, 00:24:25.063 "iops": 2378.03706671893, 00:24:25.063 "mibps": 9.28920729187082, 00:24:25.063 "io_failed": 0, 00:24:25.063 "io_timeout": 0, 00:24:25.063 "avg_latency_us": 53144.97517385195, 00:24:25.063 "min_latency_us": 10922.666666666666, 00:24:25.063 "max_latency_us": 53982.24592592593 00:24:25.063 } 00:24:25.063 ], 00:24:25.063 "core_count": 1 00:24:25.063 } 00:24:25.063 07:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:25.063 07:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.063 07:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.063 07:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.063 07:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:25.063 "subsystems": [ 00:24:25.063 { 00:24:25.063 "subsystem": 
"keyring", 00:24:25.063 "config": [ 00:24:25.063 { 00:24:25.063 "method": "keyring_file_add_key", 00:24:25.063 "params": { 00:24:25.063 "name": "key0", 00:24:25.063 "path": "/tmp/tmp.DE2e5zWclL" 00:24:25.063 } 00:24:25.063 } 00:24:25.063 ] 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "subsystem": "iobuf", 00:24:25.063 "config": [ 00:24:25.063 { 00:24:25.063 "method": "iobuf_set_options", 00:24:25.063 "params": { 00:24:25.063 "small_pool_count": 8192, 00:24:25.063 "large_pool_count": 1024, 00:24:25.063 "small_bufsize": 8192, 00:24:25.063 "large_bufsize": 135168, 00:24:25.063 "enable_numa": false 00:24:25.063 } 00:24:25.063 } 00:24:25.063 ] 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "subsystem": "sock", 00:24:25.063 "config": [ 00:24:25.063 { 00:24:25.063 "method": "sock_set_default_impl", 00:24:25.063 "params": { 00:24:25.063 "impl_name": "posix" 00:24:25.063 } 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "method": "sock_impl_set_options", 00:24:25.063 "params": { 00:24:25.063 "impl_name": "ssl", 00:24:25.063 "recv_buf_size": 4096, 00:24:25.063 "send_buf_size": 4096, 00:24:25.063 "enable_recv_pipe": true, 00:24:25.063 "enable_quickack": false, 00:24:25.063 "enable_placement_id": 0, 00:24:25.063 "enable_zerocopy_send_server": true, 00:24:25.063 "enable_zerocopy_send_client": false, 00:24:25.063 "zerocopy_threshold": 0, 00:24:25.063 "tls_version": 0, 00:24:25.063 "enable_ktls": false 00:24:25.063 } 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "method": "sock_impl_set_options", 00:24:25.063 "params": { 00:24:25.063 "impl_name": "posix", 00:24:25.063 "recv_buf_size": 2097152, 00:24:25.063 "send_buf_size": 2097152, 00:24:25.063 "enable_recv_pipe": true, 00:24:25.063 "enable_quickack": false, 00:24:25.063 "enable_placement_id": 0, 00:24:25.063 "enable_zerocopy_send_server": true, 00:24:25.063 "enable_zerocopy_send_client": false, 00:24:25.063 "zerocopy_threshold": 0, 00:24:25.063 "tls_version": 0, 00:24:25.063 "enable_ktls": false 00:24:25.063 } 00:24:25.063 } 00:24:25.063 
] 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "subsystem": "vmd", 00:24:25.063 "config": [] 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "subsystem": "accel", 00:24:25.063 "config": [ 00:24:25.063 { 00:24:25.063 "method": "accel_set_options", 00:24:25.063 "params": { 00:24:25.063 "small_cache_size": 128, 00:24:25.063 "large_cache_size": 16, 00:24:25.063 "task_count": 2048, 00:24:25.063 "sequence_count": 2048, 00:24:25.063 "buf_count": 2048 00:24:25.063 } 00:24:25.063 } 00:24:25.063 ] 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "subsystem": "bdev", 00:24:25.063 "config": [ 00:24:25.063 { 00:24:25.063 "method": "bdev_set_options", 00:24:25.063 "params": { 00:24:25.063 "bdev_io_pool_size": 65535, 00:24:25.063 "bdev_io_cache_size": 256, 00:24:25.063 "bdev_auto_examine": true, 00:24:25.063 "iobuf_small_cache_size": 128, 00:24:25.063 "iobuf_large_cache_size": 16 00:24:25.063 } 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "method": "bdev_raid_set_options", 00:24:25.063 "params": { 00:24:25.063 "process_window_size_kb": 1024, 00:24:25.063 "process_max_bandwidth_mb_sec": 0 00:24:25.063 } 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "method": "bdev_iscsi_set_options", 00:24:25.063 "params": { 00:24:25.063 "timeout_sec": 30 00:24:25.063 } 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "method": "bdev_nvme_set_options", 00:24:25.063 "params": { 00:24:25.063 "action_on_timeout": "none", 00:24:25.063 "timeout_us": 0, 00:24:25.063 "timeout_admin_us": 0, 00:24:25.063 "keep_alive_timeout_ms": 10000, 00:24:25.063 "arbitration_burst": 0, 00:24:25.063 "low_priority_weight": 0, 00:24:25.063 "medium_priority_weight": 0, 00:24:25.063 "high_priority_weight": 0, 00:24:25.063 "nvme_adminq_poll_period_us": 10000, 00:24:25.063 "nvme_ioq_poll_period_us": 0, 00:24:25.063 "io_queue_requests": 0, 00:24:25.063 "delay_cmd_submit": true, 00:24:25.063 "transport_retry_count": 4, 00:24:25.063 "bdev_retry_count": 3, 00:24:25.063 "transport_ack_timeout": 0, 00:24:25.063 "ctrlr_loss_timeout_sec": 0, 
00:24:25.063 "reconnect_delay_sec": 0, 00:24:25.063 "fast_io_fail_timeout_sec": 0, 00:24:25.063 "disable_auto_failback": false, 00:24:25.063 "generate_uuids": false, 00:24:25.063 "transport_tos": 0, 00:24:25.063 "nvme_error_stat": false, 00:24:25.063 "rdma_srq_size": 0, 00:24:25.063 "io_path_stat": false, 00:24:25.063 "allow_accel_sequence": false, 00:24:25.063 "rdma_max_cq_size": 0, 00:24:25.063 "rdma_cm_event_timeout_ms": 0, 00:24:25.063 "dhchap_digests": [ 00:24:25.063 "sha256", 00:24:25.063 "sha384", 00:24:25.063 "sha512" 00:24:25.063 ], 00:24:25.063 "dhchap_dhgroups": [ 00:24:25.063 "null", 00:24:25.063 "ffdhe2048", 00:24:25.063 "ffdhe3072", 00:24:25.063 "ffdhe4096", 00:24:25.063 "ffdhe6144", 00:24:25.063 "ffdhe8192" 00:24:25.063 ] 00:24:25.063 } 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "method": "bdev_nvme_set_hotplug", 00:24:25.063 "params": { 00:24:25.063 "period_us": 100000, 00:24:25.063 "enable": false 00:24:25.063 } 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "method": "bdev_malloc_create", 00:24:25.063 "params": { 00:24:25.063 "name": "malloc0", 00:24:25.063 "num_blocks": 8192, 00:24:25.063 "block_size": 4096, 00:24:25.063 "physical_block_size": 4096, 00:24:25.063 "uuid": "5a2804c1-3692-4516-b64d-60424ce967ef", 00:24:25.063 "optimal_io_boundary": 0, 00:24:25.063 "md_size": 0, 00:24:25.063 "dif_type": 0, 00:24:25.063 "dif_is_head_of_md": false, 00:24:25.063 "dif_pi_format": 0 00:24:25.063 } 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "method": "bdev_wait_for_examine" 00:24:25.063 } 00:24:25.063 ] 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "subsystem": "nbd", 00:24:25.063 "config": [] 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "subsystem": "scheduler", 00:24:25.063 "config": [ 00:24:25.063 { 00:24:25.063 "method": "framework_set_scheduler", 00:24:25.063 "params": { 00:24:25.063 "name": "static" 00:24:25.063 } 00:24:25.063 } 00:24:25.063 ] 00:24:25.063 }, 00:24:25.063 { 00:24:25.063 "subsystem": "nvmf", 00:24:25.063 "config": [ 00:24:25.063 { 
00:24:25.063 "method": "nvmf_set_config", 00:24:25.063 "params": { 00:24:25.063 "discovery_filter": "match_any", 00:24:25.063 "admin_cmd_passthru": { 00:24:25.063 "identify_ctrlr": false 00:24:25.063 }, 00:24:25.063 "dhchap_digests": [ 00:24:25.063 "sha256", 00:24:25.063 "sha384", 00:24:25.063 "sha512" 00:24:25.063 ], 00:24:25.063 "dhchap_dhgroups": [ 00:24:25.063 "null", 00:24:25.063 "ffdhe2048", 00:24:25.063 "ffdhe3072", 00:24:25.063 "ffdhe4096", 00:24:25.063 "ffdhe6144", 00:24:25.063 "ffdhe8192" 00:24:25.063 ] 00:24:25.064 } 00:24:25.064 }, 00:24:25.064 { 00:24:25.064 "method": "nvmf_set_max_subsystems", 00:24:25.064 "params": { 00:24:25.064 "max_subsystems": 1024 00:24:25.064 } 00:24:25.064 }, 00:24:25.064 { 00:24:25.064 "method": "nvmf_set_crdt", 00:24:25.064 "params": { 00:24:25.064 "crdt1": 0, 00:24:25.064 "crdt2": 0, 00:24:25.064 "crdt3": 0 00:24:25.064 } 00:24:25.064 }, 00:24:25.064 { 00:24:25.064 "method": "nvmf_create_transport", 00:24:25.064 "params": { 00:24:25.064 "trtype": "TCP", 00:24:25.064 "max_queue_depth": 128, 00:24:25.064 "max_io_qpairs_per_ctrlr": 127, 00:24:25.064 "in_capsule_data_size": 4096, 00:24:25.064 "max_io_size": 131072, 00:24:25.064 "io_unit_size": 131072, 00:24:25.064 "max_aq_depth": 128, 00:24:25.064 "num_shared_buffers": 511, 00:24:25.064 "buf_cache_size": 4294967295, 00:24:25.064 "dif_insert_or_strip": false, 00:24:25.064 "zcopy": false, 00:24:25.064 "c2h_success": false, 00:24:25.064 "sock_priority": 0, 00:24:25.064 "abort_timeout_sec": 1, 00:24:25.064 "ack_timeout": 0, 00:24:25.064 "data_wr_pool_size": 0 00:24:25.064 } 00:24:25.064 }, 00:24:25.064 { 00:24:25.064 "method": "nvmf_create_subsystem", 00:24:25.064 "params": { 00:24:25.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.064 "allow_any_host": false, 00:24:25.064 "serial_number": "00000000000000000000", 00:24:25.064 "model_number": "SPDK bdev Controller", 00:24:25.064 "max_namespaces": 32, 00:24:25.064 "min_cntlid": 1, 00:24:25.064 "max_cntlid": 65519, 00:24:25.064 
"ana_reporting": false 00:24:25.064 } 00:24:25.064 }, 00:24:25.064 { 00:24:25.064 "method": "nvmf_subsystem_add_host", 00:24:25.064 "params": { 00:24:25.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.064 "host": "nqn.2016-06.io.spdk:host1", 00:24:25.064 "psk": "key0" 00:24:25.064 } 00:24:25.064 }, 00:24:25.064 { 00:24:25.064 "method": "nvmf_subsystem_add_ns", 00:24:25.064 "params": { 00:24:25.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.064 "namespace": { 00:24:25.064 "nsid": 1, 00:24:25.064 "bdev_name": "malloc0", 00:24:25.064 "nguid": "5A2804C136924516B64D60424CE967EF", 00:24:25.064 "uuid": "5a2804c1-3692-4516-b64d-60424ce967ef", 00:24:25.064 "no_auto_visible": false 00:24:25.064 } 00:24:25.064 } 00:24:25.064 }, 00:24:25.064 { 00:24:25.064 "method": "nvmf_subsystem_add_listener", 00:24:25.064 "params": { 00:24:25.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.064 "listen_address": { 00:24:25.064 "trtype": "TCP", 00:24:25.064 "adrfam": "IPv4", 00:24:25.064 "traddr": "10.0.0.2", 00:24:25.064 "trsvcid": "4420" 00:24:25.064 }, 00:24:25.064 "secure_channel": false, 00:24:25.064 "sock_impl": "ssl" 00:24:25.064 } 00:24:25.064 } 00:24:25.064 ] 00:24:25.064 } 00:24:25.064 ] 00:24:25.064 }' 00:24:25.064 07:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:25.322 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:25.322 "subsystems": [ 00:24:25.322 { 00:24:25.322 "subsystem": "keyring", 00:24:25.322 "config": [ 00:24:25.322 { 00:24:25.322 "method": "keyring_file_add_key", 00:24:25.322 "params": { 00:24:25.322 "name": "key0", 00:24:25.322 "path": "/tmp/tmp.DE2e5zWclL" 00:24:25.322 } 00:24:25.322 } 00:24:25.322 ] 00:24:25.322 }, 00:24:25.322 { 00:24:25.322 "subsystem": "iobuf", 00:24:25.322 "config": [ 00:24:25.322 { 00:24:25.322 "method": "iobuf_set_options", 00:24:25.322 "params": { 00:24:25.322 
"small_pool_count": 8192, 00:24:25.322 "large_pool_count": 1024, 00:24:25.322 "small_bufsize": 8192, 00:24:25.322 "large_bufsize": 135168, 00:24:25.322 "enable_numa": false 00:24:25.322 } 00:24:25.322 } 00:24:25.322 ] 00:24:25.322 }, 00:24:25.322 { 00:24:25.322 "subsystem": "sock", 00:24:25.322 "config": [ 00:24:25.322 { 00:24:25.322 "method": "sock_set_default_impl", 00:24:25.322 "params": { 00:24:25.322 "impl_name": "posix" 00:24:25.322 } 00:24:25.322 }, 00:24:25.322 { 00:24:25.322 "method": "sock_impl_set_options", 00:24:25.322 "params": { 00:24:25.322 "impl_name": "ssl", 00:24:25.322 "recv_buf_size": 4096, 00:24:25.322 "send_buf_size": 4096, 00:24:25.322 "enable_recv_pipe": true, 00:24:25.322 "enable_quickack": false, 00:24:25.322 "enable_placement_id": 0, 00:24:25.322 "enable_zerocopy_send_server": true, 00:24:25.322 "enable_zerocopy_send_client": false, 00:24:25.322 "zerocopy_threshold": 0, 00:24:25.322 "tls_version": 0, 00:24:25.322 "enable_ktls": false 00:24:25.322 } 00:24:25.322 }, 00:24:25.322 { 00:24:25.322 "method": "sock_impl_set_options", 00:24:25.322 "params": { 00:24:25.322 "impl_name": "posix", 00:24:25.322 "recv_buf_size": 2097152, 00:24:25.322 "send_buf_size": 2097152, 00:24:25.322 "enable_recv_pipe": true, 00:24:25.322 "enable_quickack": false, 00:24:25.322 "enable_placement_id": 0, 00:24:25.322 "enable_zerocopy_send_server": true, 00:24:25.322 "enable_zerocopy_send_client": false, 00:24:25.322 "zerocopy_threshold": 0, 00:24:25.322 "tls_version": 0, 00:24:25.322 "enable_ktls": false 00:24:25.322 } 00:24:25.322 } 00:24:25.322 ] 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "subsystem": "vmd", 00:24:25.323 "config": [] 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "subsystem": "accel", 00:24:25.323 "config": [ 00:24:25.323 { 00:24:25.323 "method": "accel_set_options", 00:24:25.323 "params": { 00:24:25.323 "small_cache_size": 128, 00:24:25.323 "large_cache_size": 16, 00:24:25.323 "task_count": 2048, 00:24:25.323 "sequence_count": 2048, 00:24:25.323 
"buf_count": 2048 00:24:25.323 } 00:24:25.323 } 00:24:25.323 ] 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "subsystem": "bdev", 00:24:25.323 "config": [ 00:24:25.323 { 00:24:25.323 "method": "bdev_set_options", 00:24:25.323 "params": { 00:24:25.323 "bdev_io_pool_size": 65535, 00:24:25.323 "bdev_io_cache_size": 256, 00:24:25.323 "bdev_auto_examine": true, 00:24:25.323 "iobuf_small_cache_size": 128, 00:24:25.323 "iobuf_large_cache_size": 16 00:24:25.323 } 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "method": "bdev_raid_set_options", 00:24:25.323 "params": { 00:24:25.323 "process_window_size_kb": 1024, 00:24:25.323 "process_max_bandwidth_mb_sec": 0 00:24:25.323 } 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "method": "bdev_iscsi_set_options", 00:24:25.323 "params": { 00:24:25.323 "timeout_sec": 30 00:24:25.323 } 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "method": "bdev_nvme_set_options", 00:24:25.323 "params": { 00:24:25.323 "action_on_timeout": "none", 00:24:25.323 "timeout_us": 0, 00:24:25.323 "timeout_admin_us": 0, 00:24:25.323 "keep_alive_timeout_ms": 10000, 00:24:25.323 "arbitration_burst": 0, 00:24:25.323 "low_priority_weight": 0, 00:24:25.323 "medium_priority_weight": 0, 00:24:25.323 "high_priority_weight": 0, 00:24:25.323 "nvme_adminq_poll_period_us": 10000, 00:24:25.323 "nvme_ioq_poll_period_us": 0, 00:24:25.323 "io_queue_requests": 512, 00:24:25.323 "delay_cmd_submit": true, 00:24:25.323 "transport_retry_count": 4, 00:24:25.323 "bdev_retry_count": 3, 00:24:25.323 "transport_ack_timeout": 0, 00:24:25.323 "ctrlr_loss_timeout_sec": 0, 00:24:25.323 "reconnect_delay_sec": 0, 00:24:25.323 "fast_io_fail_timeout_sec": 0, 00:24:25.323 "disable_auto_failback": false, 00:24:25.323 "generate_uuids": false, 00:24:25.323 "transport_tos": 0, 00:24:25.323 "nvme_error_stat": false, 00:24:25.323 "rdma_srq_size": 0, 00:24:25.323 "io_path_stat": false, 00:24:25.323 "allow_accel_sequence": false, 00:24:25.323 "rdma_max_cq_size": 0, 00:24:25.323 "rdma_cm_event_timeout_ms": 0, 
00:24:25.323 "dhchap_digests": [ 00:24:25.323 "sha256", 00:24:25.323 "sha384", 00:24:25.323 "sha512" 00:24:25.323 ], 00:24:25.323 "dhchap_dhgroups": [ 00:24:25.323 "null", 00:24:25.323 "ffdhe2048", 00:24:25.323 "ffdhe3072", 00:24:25.323 "ffdhe4096", 00:24:25.323 "ffdhe6144", 00:24:25.323 "ffdhe8192" 00:24:25.323 ] 00:24:25.323 } 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "method": "bdev_nvme_attach_controller", 00:24:25.323 "params": { 00:24:25.323 "name": "nvme0", 00:24:25.323 "trtype": "TCP", 00:24:25.323 "adrfam": "IPv4", 00:24:25.323 "traddr": "10.0.0.2", 00:24:25.323 "trsvcid": "4420", 00:24:25.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.323 "prchk_reftag": false, 00:24:25.323 "prchk_guard": false, 00:24:25.323 "ctrlr_loss_timeout_sec": 0, 00:24:25.323 "reconnect_delay_sec": 0, 00:24:25.323 "fast_io_fail_timeout_sec": 0, 00:24:25.323 "psk": "key0", 00:24:25.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.323 "hdgst": false, 00:24:25.323 "ddgst": false, 00:24:25.323 "multipath": "multipath" 00:24:25.323 } 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "method": "bdev_nvme_set_hotplug", 00:24:25.323 "params": { 00:24:25.323 "period_us": 100000, 00:24:25.323 "enable": false 00:24:25.323 } 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "method": "bdev_enable_histogram", 00:24:25.323 "params": { 00:24:25.323 "name": "nvme0n1", 00:24:25.323 "enable": true 00:24:25.323 } 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "method": "bdev_wait_for_examine" 00:24:25.323 } 00:24:25.323 ] 00:24:25.323 }, 00:24:25.323 { 00:24:25.323 "subsystem": "nbd", 00:24:25.323 "config": [] 00:24:25.323 } 00:24:25.323 ] 00:24:25.323 }' 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3003858 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3003858 ']' 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3003858 00:24:25.323 07:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003858 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003858' 00:24:25.323 killing process with pid 3003858 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003858 00:24:25.323 Received shutdown signal, test time was about 1.000000 seconds 00:24:25.323 00:24:25.323 Latency(us) 00:24:25.323 [2024-11-19T06:48:17.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.323 [2024-11-19T06:48:17.253Z] =================================================================================================================== 00:24:25.323 [2024-11-19T06:48:17.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.323 07:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003858 00:24:26.257 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3003705 00:24:26.257 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3003705 ']' 00:24:26.257 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3003705 00:24:26.257 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.257 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.258 
07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003705 00:24:26.516 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:26.516 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:26.516 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003705' 00:24:26.516 killing process with pid 3003705 00:24:26.516 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3003705 00:24:26.516 07:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3003705 00:24:27.457 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:27.457 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:27.457 "subsystems": [ 00:24:27.457 { 00:24:27.457 "subsystem": "keyring", 00:24:27.457 "config": [ 00:24:27.457 { 00:24:27.457 "method": "keyring_file_add_key", 00:24:27.457 "params": { 00:24:27.457 "name": "key0", 00:24:27.457 "path": "/tmp/tmp.DE2e5zWclL" 00:24:27.457 } 00:24:27.457 } 00:24:27.457 ] 00:24:27.457 }, 00:24:27.457 { 00:24:27.457 "subsystem": "iobuf", 00:24:27.457 "config": [ 00:24:27.457 { 00:24:27.457 "method": "iobuf_set_options", 00:24:27.457 "params": { 00:24:27.457 "small_pool_count": 8192, 00:24:27.457 "large_pool_count": 1024, 00:24:27.457 "small_bufsize": 8192, 00:24:27.457 "large_bufsize": 135168, 00:24:27.457 "enable_numa": false 00:24:27.457 } 00:24:27.457 } 00:24:27.457 ] 00:24:27.457 }, 00:24:27.457 { 00:24:27.457 "subsystem": "sock", 00:24:27.457 "config": [ 00:24:27.457 { 00:24:27.457 "method": "sock_set_default_impl", 00:24:27.457 "params": { 00:24:27.457 "impl_name": "posix" 00:24:27.457 } 00:24:27.457 }, 00:24:27.457 { 00:24:27.457 "method": "sock_impl_set_options", 00:24:27.457 
"params": { 00:24:27.457 "impl_name": "ssl", 00:24:27.457 "recv_buf_size": 4096, 00:24:27.457 "send_buf_size": 4096, 00:24:27.457 "enable_recv_pipe": true, 00:24:27.457 "enable_quickack": false, 00:24:27.457 "enable_placement_id": 0, 00:24:27.457 "enable_zerocopy_send_server": true, 00:24:27.457 "enable_zerocopy_send_client": false, 00:24:27.457 "zerocopy_threshold": 0, 00:24:27.457 "tls_version": 0, 00:24:27.457 "enable_ktls": false 00:24:27.457 } 00:24:27.457 }, 00:24:27.457 { 00:24:27.457 "method": "sock_impl_set_options", 00:24:27.457 "params": { 00:24:27.457 "impl_name": "posix", 00:24:27.457 "recv_buf_size": 2097152, 00:24:27.457 "send_buf_size": 2097152, 00:24:27.457 "enable_recv_pipe": true, 00:24:27.457 "enable_quickack": false, 00:24:27.457 "enable_placement_id": 0, 00:24:27.457 "enable_zerocopy_send_server": true, 00:24:27.457 "enable_zerocopy_send_client": false, 00:24:27.457 "zerocopy_threshold": 0, 00:24:27.457 "tls_version": 0, 00:24:27.457 "enable_ktls": false 00:24:27.457 } 00:24:27.457 } 00:24:27.457 ] 00:24:27.457 }, 00:24:27.457 { 00:24:27.457 "subsystem": "vmd", 00:24:27.458 "config": [] 00:24:27.458 }, 00:24:27.458 { 00:24:27.458 "subsystem": "accel", 00:24:27.458 "config": [ 00:24:27.458 { 00:24:27.458 "method": "accel_set_options", 00:24:27.458 "params": { 00:24:27.458 "small_cache_size": 128, 00:24:27.458 "large_cache_size": 16, 00:24:27.458 "task_count": 2048, 00:24:27.458 "sequence_count": 2048, 00:24:27.458 "buf_count": 2048 00:24:27.458 } 00:24:27.458 } 00:24:27.458 ] 00:24:27.458 }, 00:24:27.458 { 00:24:27.458 "subsystem": "bdev", 00:24:27.458 "config": [ 00:24:27.458 { 00:24:27.458 "method": "bdev_set_options", 00:24:27.458 "params": { 00:24:27.458 "bdev_io_pool_size": 65535, 00:24:27.458 "bdev_io_cache_size": 256, 00:24:27.458 "bdev_auto_examine": true, 00:24:27.458 "iobuf_small_cache_size": 128, 00:24:27.458 "iobuf_large_cache_size": 16 00:24:27.458 } 00:24:27.458 }, 00:24:27.458 { 00:24:27.458 "method": "bdev_raid_set_options", 
00:24:27.458 "params": { 00:24:27.458 "process_window_size_kb": 1024, 00:24:27.458 "process_max_bandwidth_mb_sec": 0 00:24:27.458 } 00:24:27.458 }, 00:24:27.458 { 00:24:27.458 "method": "bdev_iscsi_set_options", 00:24:27.458 "params": { 00:24:27.458 "timeout_sec": 30 00:24:27.458 } 00:24:27.458 }, 00:24:27.458 { 00:24:27.458 "method": "bdev_nvme_set_options", 00:24:27.458 "params": { 00:24:27.458 "action_on_timeout": "none", 00:24:27.458 "timeout_us": 0, 00:24:27.458 "timeout_admin_us": 0, 00:24:27.458 "keep_alive_timeout_ms": 10000, 00:24:27.458 "arbitration_burst": 0, 00:24:27.458 "low_priority_weight": 0, 00:24:27.458 "medium_priority_weight": 0, 00:24:27.458 "high_priority_weight": 0, 00:24:27.458 "nvme_adminq_poll_period_us": 10000, 00:24:27.458 "nvme_ioq_poll_period_us": 0, 00:24:27.458 "io_queue_requests": 0, 00:24:27.458 "delay_cmd_submit": true, 00:24:27.458 "transport_retry_count": 4, 00:24:27.458 "bdev_retry_count": 3, 00:24:27.458 "transport_ack_timeout": 0, 00:24:27.458 "ctrlr_loss_timeout_sec": 0, 00:24:27.458 "reconnect_delay_sec": 0, 00:24:27.458 "fast_io_fail_timeout_sec": 0, 00:24:27.458 "disable_auto_failback": false, 00:24:27.458 "generate_uuids": false, 00:24:27.458 "transport_tos": 0, 00:24:27.458 "nvme_error_stat": false, 00:24:27.458 "rdma_srq_size": 0, 00:24:27.458 "io_path_stat": false, 00:24:27.458 "allow_accel_sequence": false, 00:24:27.458 "rdma_max_cq_size": 0, 00:24:27.458 "rdma_cm_event_timeout_ms": 0, 00:24:27.458 "dhchap_digests": [ 00:24:27.458 "sha256", 00:24:27.458 "sha384", 00:24:27.458 "sha512" 00:24:27.458 ], 00:24:27.458 "dhchap_dhgroups": [ 00:24:27.458 "null", 00:24:27.458 "ffdhe2048", 00:24:27.458 "ffdhe3072", 00:24:27.458 "ffdhe4096", 00:24:27.458 "ffdhe6144", 00:24:27.458 "ffdhe8192" 00:24:27.458 ] 00:24:27.458 } 00:24:27.458 }, 00:24:27.458 { 00:24:27.458 "method": "bdev_nvme_set_hotplug", 00:24:27.458 "params": { 00:24:27.458 "period_us": 100000, 00:24:27.458 "enable": false 00:24:27.458 } 00:24:27.458 }, 00:24:27.458 
{ 00:24:27.458 "method": "bdev_malloc_create", 00:24:27.458 "params": { 00:24:27.458 "name": "malloc0", 00:24:27.458 "num_blocks": 8192, 00:24:27.459 "block_size": 4096, 00:24:27.459 "physical_block_size": 4096, 00:24:27.459 "uuid": "5a2804c1-3692-4516-b64d-60424ce967ef", 00:24:27.459 "optimal_io_boundary": 0, 00:24:27.459 "md_size": 0, 00:24:27.459 "dif_type": 0, 00:24:27.459 "dif_is_head_of_md": false, 00:24:27.459 "dif_pi_format": 0 00:24:27.459 } 00:24:27.459 }, 00:24:27.459 { 00:24:27.459 "method": "bdev_wait_for_examine" 00:24:27.459 } 00:24:27.459 ] 00:24:27.459 }, 00:24:27.459 { 00:24:27.459 "subsystem": "nbd", 00:24:27.459 "config": [] 00:24:27.459 }, 00:24:27.459 { 00:24:27.459 "subsystem": "scheduler", 00:24:27.459 "config": [ 00:24:27.459 { 00:24:27.459 "method": "framework_set_scheduler", 00:24:27.459 "params": { 00:24:27.459 "name": "static" 00:24:27.459 } 00:24:27.459 } 00:24:27.459 ] 00:24:27.459 }, 00:24:27.459 { 00:24:27.459 "subsystem": "nvmf", 00:24:27.459 "config": [ 00:24:27.459 { 00:24:27.459 "method": "nvmf_set_config", 00:24:27.459 "params": { 00:24:27.459 "discovery_filter": "match_any", 00:24:27.459 "admin_cmd_passthru": { 00:24:27.459 "identify_ctrlr": false 00:24:27.459 }, 00:24:27.459 "dhchap_digests": [ 00:24:27.459 "sha256", 00:24:27.459 "sha384", 00:24:27.459 "sha512" 00:24:27.459 ], 00:24:27.459 "dhchap_dhgroups": [ 00:24:27.459 "null", 00:24:27.459 "ffdhe2048", 00:24:27.459 "ffdhe3072", 00:24:27.459 "ffdhe4096", 00:24:27.459 "ffdhe6144", 00:24:27.459 "ffdhe8192" 00:24:27.459 ] 00:24:27.459 } 00:24:27.459 }, 00:24:27.459 { 00:24:27.459 "method": "nvmf_set_max_subsystems", 00:24:27.459 "params": { 00:24:27.459 "max_subsystems": 1024 00:24:27.459 } 00:24:27.459 }, 00:24:27.459 { 00:24:27.459 "method": "nvmf_set_crdt", 00:24:27.459 "params": { 00:24:27.459 "crdt1": 0, 00:24:27.459 "crdt2": 0, 00:24:27.459 "crdt3": 0 00:24:27.459 } 00:24:27.459 }, 00:24:27.459 { 00:24:27.459 "method": "nvmf_create_transport", 00:24:27.459 "params": { 
00:24:27.459 "trtype": "TCP", 00:24:27.459 "max_queue_depth": 128, 00:24:27.459 "max_io_qpairs_per_ctrlr": 127, 00:24:27.459 "in_capsule_data_size": 4096, 00:24:27.459 "max_io_size": 131072, 00:24:27.459 "io_unit_size": 131072, 00:24:27.459 "max_aq_depth": 128, 00:24:27.459 "num_shared_buffers": 511, 00:24:27.459 "buf_cache_size": 4294967295, 00:24:27.459 "dif_insert_or_strip": false, 00:24:27.459 "zcopy": false, 00:24:27.459 "c2h_success": false, 00:24:27.459 "sock_priority": 0, 00:24:27.459 "abort_timeout_sec": 1, 00:24:27.459 "ack_timeout": 0, 00:24:27.459 "data_wr_pool_size": 0 00:24:27.459 } 00:24:27.459 }, 00:24:27.459 { 00:24:27.459 "method": "nvmf_create_subsystem", 00:24:27.459 "params": { 00:24:27.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.459 "allow_any_host": false, 00:24:27.459 "serial_number": "00000000000000000000", 00:24:27.459 "model_number": "SPDK bdev Controller", 00:24:27.459 "max_namespaces": 32, 00:24:27.460 "min_cntlid": 1, 00:24:27.460 "max_cntlid": 65519, 00:24:27.460 "ana_reporting": false 00:24:27.460 } 00:24:27.460 }, 00:24:27.460 { 00:24:27.460 "method": "nvmf_subsystem_add_host", 00:24:27.460 "params": { 00:24:27.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.460 "host": "nqn.2016-06.io.spdk:host1", 00:24:27.460 "psk": "key0" 00:24:27.460 } 00:24:27.460 }, 00:24:27.460 { 00:24:27.460 "method": "nvmf_subsystem_add_ns", 00:24:27.460 "params": { 00:24:27.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.460 "namespace": { 00:24:27.460 "nsid": 1, 00:24:27.460 "bdev_name": "malloc0", 00:24:27.460 "nguid": "5A2804C136924516B64D60424CE967EF", 00:24:27.460 "uuid": "5a2804c1-3692-4516-b64d-60424ce967ef", 00:24:27.460 "no_auto_visible": false 00:24:27.460 } 00:24:27.460 } 00:24:27.460 }, 00:24:27.460 { 00:24:27.460 "method": "nvmf_subsystem_add_listener", 00:24:27.460 "params": { 00:24:27.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.460 "listen_address": { 00:24:27.460 "trtype": "TCP", 00:24:27.460 "adrfam": "IPv4", 00:24:27.460 
"traddr": "10.0.0.2", 00:24:27.460 "trsvcid": "4420" 00:24:27.460 }, 00:24:27.460 "secure_channel": false, 00:24:27.460 "sock_impl": "ssl" 00:24:27.460 } 00:24:27.460 } 00:24:27.460 ] 00:24:27.460 } 00:24:27.460 ] 00:24:27.460 }' 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3004536 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3004536 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3004536 ']' 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.460 07:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.721 [2024-11-19 07:48:19.467905] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:27.721 [2024-11-19 07:48:19.468067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.721 [2024-11-19 07:48:19.608545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.979 [2024-11-19 07:48:19.742250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.979 [2024-11-19 07:48:19.742342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.979 [2024-11-19 07:48:19.742368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.979 [2024-11-19 07:48:19.742393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.979 [2024-11-19 07:48:19.742414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:27.979 [2024-11-19 07:48:19.744196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.546 [2024-11-19 07:48:20.299218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.546 [2024-11-19 07:48:20.331233] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.546 [2024-11-19 07:48:20.331550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.546 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.546 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:28.546 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.546 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.546 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.546 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.546 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3004686 00:24:28.546 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3004686 /var/tmp/bdevperf.sock 00:24:28.546 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3004686 ']' 00:24:28.805 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.805 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:28.805 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:28.805 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.805 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:28.805 "subsystems": [ 00:24:28.805 { 00:24:28.805 "subsystem": "keyring", 00:24:28.805 "config": [ 00:24:28.805 { 00:24:28.805 "method": "keyring_file_add_key", 00:24:28.805 "params": { 00:24:28.805 "name": "key0", 00:24:28.805 "path": "/tmp/tmp.DE2e5zWclL" 00:24:28.805 } 00:24:28.805 } 00:24:28.805 ] 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "subsystem": "iobuf", 00:24:28.805 "config": [ 00:24:28.805 { 00:24:28.805 "method": "iobuf_set_options", 00:24:28.805 "params": { 00:24:28.805 "small_pool_count": 8192, 00:24:28.805 "large_pool_count": 1024, 00:24:28.805 "small_bufsize": 8192, 00:24:28.805 "large_bufsize": 135168, 00:24:28.805 "enable_numa": false 00:24:28.805 } 00:24:28.805 } 00:24:28.805 ] 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "subsystem": "sock", 00:24:28.805 "config": [ 00:24:28.805 { 00:24:28.805 "method": "sock_set_default_impl", 00:24:28.805 "params": { 00:24:28.805 "impl_name": "posix" 00:24:28.805 } 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "method": "sock_impl_set_options", 00:24:28.805 "params": { 00:24:28.805 "impl_name": "ssl", 00:24:28.805 "recv_buf_size": 4096, 00:24:28.805 "send_buf_size": 4096, 00:24:28.805 "enable_recv_pipe": true, 00:24:28.805 "enable_quickack": false, 00:24:28.805 "enable_placement_id": 0, 00:24:28.805 "enable_zerocopy_send_server": true, 00:24:28.805 "enable_zerocopy_send_client": false, 00:24:28.805 "zerocopy_threshold": 0, 00:24:28.805 "tls_version": 0, 00:24:28.805 "enable_ktls": false 00:24:28.805 } 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "method": "sock_impl_set_options", 00:24:28.805 "params": { 
00:24:28.805 "impl_name": "posix", 00:24:28.805 "recv_buf_size": 2097152, 00:24:28.805 "send_buf_size": 2097152, 00:24:28.805 "enable_recv_pipe": true, 00:24:28.805 "enable_quickack": false, 00:24:28.805 "enable_placement_id": 0, 00:24:28.805 "enable_zerocopy_send_server": true, 00:24:28.805 "enable_zerocopy_send_client": false, 00:24:28.805 "zerocopy_threshold": 0, 00:24:28.805 "tls_version": 0, 00:24:28.805 "enable_ktls": false 00:24:28.805 } 00:24:28.805 } 00:24:28.805 ] 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "subsystem": "vmd", 00:24:28.805 "config": [] 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "subsystem": "accel", 00:24:28.805 "config": [ 00:24:28.805 { 00:24:28.805 "method": "accel_set_options", 00:24:28.805 "params": { 00:24:28.805 "small_cache_size": 128, 00:24:28.805 "large_cache_size": 16, 00:24:28.805 "task_count": 2048, 00:24:28.805 "sequence_count": 2048, 00:24:28.805 "buf_count": 2048 00:24:28.805 } 00:24:28.805 } 00:24:28.805 ] 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "subsystem": "bdev", 00:24:28.805 "config": [ 00:24:28.805 { 00:24:28.805 "method": "bdev_set_options", 00:24:28.805 "params": { 00:24:28.805 "bdev_io_pool_size": 65535, 00:24:28.805 "bdev_io_cache_size": 256, 00:24:28.805 "bdev_auto_examine": true, 00:24:28.805 "iobuf_small_cache_size": 128, 00:24:28.805 "iobuf_large_cache_size": 16 00:24:28.805 } 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "method": "bdev_raid_set_options", 00:24:28.805 "params": { 00:24:28.805 "process_window_size_kb": 1024, 00:24:28.805 "process_max_bandwidth_mb_sec": 0 00:24:28.805 } 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "method": "bdev_iscsi_set_options", 00:24:28.805 "params": { 00:24:28.805 "timeout_sec": 30 00:24:28.805 } 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "method": "bdev_nvme_set_options", 00:24:28.805 "params": { 00:24:28.805 "action_on_timeout": "none", 00:24:28.805 "timeout_us": 0, 00:24:28.805 "timeout_admin_us": 0, 00:24:28.805 "keep_alive_timeout_ms": 10000, 00:24:28.805 
"arbitration_burst": 0, 00:24:28.805 "low_priority_weight": 0, 00:24:28.805 "medium_priority_weight": 0, 00:24:28.805 "high_priority_weight": 0, 00:24:28.805 "nvme_adminq_poll_period_us": 10000, 00:24:28.805 "nvme_ioq_poll_period_us": 0, 00:24:28.805 "io_queue_requests": 512, 00:24:28.805 "delay_cmd_submit": true, 00:24:28.805 "transport_retry_count": 4, 00:24:28.805 "bdev_retry_count": 3, 00:24:28.805 "transport_ack_timeout": 0, 00:24:28.805 "ctrlr_loss_timeout_sec": 0, 00:24:28.805 "reconnect_delay_sec": 0, 00:24:28.805 "fast_io_fail_timeout_sec": 0, 00:24:28.805 "disable_auto_failback": false, 00:24:28.805 "generate_uuids": false, 00:24:28.805 "transport_tos": 0, 00:24:28.805 "nvme_error_stat": false, 00:24:28.805 "rdma_srq_size": 0, 00:24:28.805 "io_path_stat": false, 00:24:28.805 "allow_accel_sequence": false, 00:24:28.805 "rdma_max_cq_size": 0, 00:24:28.805 "rdma_cm_event_timeout_ms": 0, 00:24:28.805 "dhchap_digests": [ 00:24:28.805 "sha256", 00:24:28.805 "sha384", 00:24:28.805 "sha512" 00:24:28.805 ], 00:24:28.805 "dhchap_dhgroups": [ 00:24:28.805 "null", 00:24:28.805 "ffdhe2048", 00:24:28.805 "ffdhe3072", 00:24:28.805 "ffdhe4096", 00:24:28.805 "ffdhe6144", 00:24:28.805 "ffdhe8192" 00:24:28.805 ] 00:24:28.805 } 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "method": "bdev_nvme_attach_controller", 00:24:28.805 "params": { 00:24:28.805 "name": "nvme0", 00:24:28.805 "trtype": "TCP", 00:24:28.805 "adrfam": "IPv4", 00:24:28.805 "traddr": "10.0.0.2", 00:24:28.805 "trsvcid": "4420", 00:24:28.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.805 "prchk_reftag": false, 00:24:28.805 "prchk_guard": false, 00:24:28.805 "ctrlr_loss_timeout_sec": 0, 00:24:28.805 "reconnect_delay_sec": 0, 00:24:28.805 "fast_io_fail_timeout_sec": 0, 00:24:28.805 "psk": "key0", 00:24:28.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.805 "hdgst": false, 00:24:28.805 "ddgst": false, 00:24:28.805 "multipath": "multipath" 00:24:28.805 } 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 
"method": "bdev_nvme_set_hotplug", 00:24:28.805 "params": { 00:24:28.805 "period_us": 100000, 00:24:28.805 "enable": false 00:24:28.805 } 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "method": "bdev_enable_histogram", 00:24:28.805 "params": { 00:24:28.805 "name": "nvme0n1", 00:24:28.805 "enable": true 00:24:28.805 } 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "method": "bdev_wait_for_examine" 00:24:28.805 } 00:24:28.805 ] 00:24:28.805 }, 00:24:28.805 { 00:24:28.805 "subsystem": "nbd", 00:24:28.805 "config": [] 00:24:28.805 } 00:24:28.805 ] 00:24:28.805 }' 00:24:28.805 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.806 07:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.806 [2024-11-19 07:48:20.560911] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:28.806 [2024-11-19 07:48:20.561066] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3004686 ] 00:24:28.806 [2024-11-19 07:48:20.696216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.064 [2024-11-19 07:48:20.820652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.323 [2024-11-19 07:48:21.232238] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.889 07:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.889 07:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:29.889 07:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:29.889 07:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:30.147 07:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.147 07:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:30.147 Running I/O for 1 seconds... 00:24:31.082 2421.00 IOPS, 9.46 MiB/s 00:24:31.082 Latency(us) 00:24:31.082 [2024-11-19T06:48:23.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.082 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:31.082 Verification LBA range: start 0x0 length 0x2000 00:24:31.082 nvme0n1 : 1.03 2477.30 9.68 0.00 0.00 51025.20 10388.67 45438.29 00:24:31.082 [2024-11-19T06:48:23.012Z] =================================================================================================================== 00:24:31.082 [2024-11-19T06:48:23.012Z] Total : 2477.30 9.68 0.00 0.00 51025.20 10388.67 45438.29 00:24:31.082 { 00:24:31.082 "results": [ 00:24:31.082 { 00:24:31.082 "job": "nvme0n1", 00:24:31.082 "core_mask": "0x2", 00:24:31.082 "workload": "verify", 00:24:31.082 "status": "finished", 00:24:31.082 "verify_range": { 00:24:31.082 "start": 0, 00:24:31.082 "length": 8192 00:24:31.082 }, 00:24:31.082 "queue_depth": 128, 00:24:31.082 "io_size": 4096, 00:24:31.082 "runtime": 1.028942, 00:24:31.082 "iops": 2477.30192761108, 00:24:31.082 "mibps": 9.676960654730781, 00:24:31.082 "io_failed": 0, 00:24:31.082 "io_timeout": 0, 00:24:31.082 "avg_latency_us": 51025.198529561334, 00:24:31.082 "min_latency_us": 10388.66962962963, 00:24:31.082 "max_latency_us": 45438.293333333335 00:24:31.082 } 00:24:31.082 ], 00:24:31.082 "core_count": 1 00:24:31.082 } 00:24:31.082 07:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:31.082 07:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:31.082 07:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:31.082 07:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:31.082 07:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:31.082 07:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:31.082 07:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:31.082 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:31.082 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:31.082 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:31.082 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:31.082 nvmf_trace.0 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3004686 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3004686 ']' 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3004686 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3004686 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004686' 00:24:31.340 killing process with pid 3004686 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3004686 00:24:31.340 Received shutdown signal, test time was about 1.000000 seconds 00:24:31.340 00:24:31.340 Latency(us) 00:24:31.340 [2024-11-19T06:48:23.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.340 [2024-11-19T06:48:23.270Z] =================================================================================================================== 00:24:31.340 [2024-11-19T06:48:23.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.340 07:48:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3004686 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:32.275 rmmod nvme_tcp 00:24:32.275 rmmod nvme_fabrics 00:24:32.275 rmmod nvme_keyring 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3004536 ']' 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3004536 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3004536 ']' 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3004536 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004536 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004536' 00:24:32.275 killing process with pid 3004536 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3004536 00:24:32.275 07:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3004536 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.650 07:48:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.wyw4P4LwBT /tmp/tmp.ZnmBbFB1zR /tmp/tmp.DE2e5zWclL 00:24:35.555 00:24:35.555 real 1m53.703s 00:24:35.555 user 3m8.929s 00:24:35.555 sys 0m27.292s 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.555 ************************************ 00:24:35.555 END TEST nvmf_tls 00:24:35.555 ************************************ 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:35.555 ************************************ 00:24:35.555 START TEST nvmf_fips 00:24:35.555 ************************************ 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:35.555 * Looking for test storage... 00:24:35.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:35.555 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.814 
07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:35.814 07:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.814 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:35.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.815 --rc genhtml_branch_coverage=1 00:24:35.815 --rc genhtml_function_coverage=1 00:24:35.815 --rc genhtml_legend=1 00:24:35.815 --rc geninfo_all_blocks=1 00:24:35.815 --rc geninfo_unexecuted_blocks=1 00:24:35.815 00:24:35.815 ' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:35.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.815 --rc genhtml_branch_coverage=1 00:24:35.815 --rc genhtml_function_coverage=1 00:24:35.815 --rc genhtml_legend=1 00:24:35.815 --rc geninfo_all_blocks=1 00:24:35.815 --rc geninfo_unexecuted_blocks=1 00:24:35.815 00:24:35.815 ' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:35.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.815 --rc genhtml_branch_coverage=1 00:24:35.815 --rc genhtml_function_coverage=1 00:24:35.815 --rc genhtml_legend=1 00:24:35.815 --rc geninfo_all_blocks=1 00:24:35.815 --rc geninfo_unexecuted_blocks=1 00:24:35.815 00:24:35.815 ' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:35.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.815 --rc genhtml_branch_coverage=1 00:24:35.815 --rc genhtml_function_coverage=1 00:24:35.815 --rc genhtml_legend=1 00:24:35.815 --rc geninfo_all_blocks=1 00:24:35.815 --rc geninfo_unexecuted_blocks=1 00:24:35.815 00:24:35.815 ' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.815 07:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.815 07:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:35.815 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:35.816 Error setting digest 00:24:35.816 40F247BE267F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:35.816 40F247BE267F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.816 07:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.816 07:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:38.347 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:38.347 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:38.347 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.347 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:38.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.348 07:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:24:38.348 00:24:38.348 --- 10.0.0.2 ping statistics --- 00:24:38.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.348 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:24:38.348 00:24:38.348 --- 10.0.0.1 ping statistics --- 00:24:38.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.348 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.348 07:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3007189 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3007189 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3007189 ']' 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.348 07:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:38.348 [2024-11-19 07:48:30.087166] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:38.348 [2024-11-19 07:48:30.087349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.348 [2024-11-19 07:48:30.249537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.606 [2024-11-19 07:48:30.374597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.606 [2024-11-19 07:48:30.374705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.606 [2024-11-19 07:48:30.374732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.606 [2024-11-19 07:48:30.374764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.606 [2024-11-19 07:48:30.374783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:38.606 [2024-11-19 07:48:30.376276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.172 07:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.172 07:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:39.172 07:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.172 07:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.172 07:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.sP2 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.sP2 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.sP2 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.sP2 00:24:39.172 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:39.431 [2024-11-19 07:48:31.279882] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.431 [2024-11-19 07:48:31.295844] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.431 [2024-11-19 07:48:31.296183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.689 malloc0 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3007357 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3007357 /var/tmp/bdevperf.sock 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3007357 ']' 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.689 07:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:39.689 [2024-11-19 07:48:31.506309] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:39.689 [2024-11-19 07:48:31.506472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007357 ] 00:24:39.948 [2024-11-19 07:48:31.637149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.948 [2024-11-19 07:48:31.757360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.514 07:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.514 07:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:40.514 07:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.sP2 00:24:40.772 07:48:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:41.032 [2024-11-19 07:48:32.944512] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.290 TLSTESTn1 00:24:41.290 07:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.290 Running I/O for 10 seconds... 
00:24:43.610 2549.00 IOPS, 9.96 MiB/s [2024-11-19T06:48:36.475Z] 2622.00 IOPS, 10.24 MiB/s [2024-11-19T06:48:37.410Z] 2649.00 IOPS, 10.35 MiB/s [2024-11-19T06:48:38.410Z] 2666.00 IOPS, 10.41 MiB/s [2024-11-19T06:48:39.342Z] 2672.20 IOPS, 10.44 MiB/s [2024-11-19T06:48:40.277Z] 2678.00 IOPS, 10.46 MiB/s [2024-11-19T06:48:41.208Z] 2679.29 IOPS, 10.47 MiB/s [2024-11-19T06:48:42.582Z] 2682.00 IOPS, 10.48 MiB/s [2024-11-19T06:48:43.518Z] 2685.33 IOPS, 10.49 MiB/s [2024-11-19T06:48:43.518Z] 2686.20 IOPS, 10.49 MiB/s 00:24:51.588 Latency(us) 00:24:51.588 [2024-11-19T06:48:43.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.588 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:51.588 Verification LBA range: start 0x0 length 0x2000 00:24:51.588 TLSTESTn1 : 10.03 2691.88 10.52 0.00 0.00 47461.08 9369.22 51652.08 00:24:51.588 [2024-11-19T06:48:43.518Z] =================================================================================================================== 00:24:51.588 [2024-11-19T06:48:43.518Z] Total : 2691.88 10.52 0.00 0.00 47461.08 9369.22 51652.08 00:24:51.588 { 00:24:51.588 "results": [ 00:24:51.588 { 00:24:51.588 "job": "TLSTESTn1", 00:24:51.588 "core_mask": "0x4", 00:24:51.588 "workload": "verify", 00:24:51.588 "status": "finished", 00:24:51.588 "verify_range": { 00:24:51.588 "start": 0, 00:24:51.588 "length": 8192 00:24:51.588 }, 00:24:51.588 "queue_depth": 128, 00:24:51.588 "io_size": 4096, 00:24:51.588 "runtime": 10.026466, 00:24:51.588 "iops": 2691.875681820494, 00:24:51.588 "mibps": 10.515139382111304, 00:24:51.588 "io_failed": 0, 00:24:51.588 "io_timeout": 0, 00:24:51.588 "avg_latency_us": 47461.07623202009, 00:24:51.588 "min_latency_us": 9369.22074074074, 00:24:51.588 "max_latency_us": 51652.07703703704 00:24:51.588 } 00:24:51.588 ], 00:24:51.588 "core_count": 1 00:24:51.588 } 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:51.588 
07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:51.588 nvmf_trace.0 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3007357 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3007357 ']' 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3007357 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007357 00:24:51.588 07:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007357' 00:24:51.588 killing process with pid 3007357 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3007357 00:24:51.588 Received shutdown signal, test time was about 10.000000 seconds 00:24:51.588 00:24:51.588 Latency(us) 00:24:51.588 [2024-11-19T06:48:43.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.588 [2024-11-19T06:48:43.518Z] =================================================================================================================== 00:24:51.588 [2024-11-19T06:48:43.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:51.588 07:48:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3007357 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:52.523 rmmod nvme_tcp 00:24:52.523 rmmod nvme_fabrics 00:24:52.523 rmmod nvme_keyring 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3007189 ']' 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3007189 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3007189 ']' 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3007189 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007189 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007189' 00:24:52.523 killing process with pid 3007189 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3007189 00:24:52.523 07:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3007189 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.899 07:48:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.804 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:55.804 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.sP2 00:24:55.804 00:24:55.804 real 0m20.288s 00:24:55.804 user 0m27.848s 00:24:55.804 sys 0m5.216s 00:24:55.804 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:55.804 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:55.804 ************************************ 00:24:55.804 END TEST nvmf_fips 00:24:55.804 ************************************ 00:24:55.804 07:48:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:55.804 07:48:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:55.804 07:48:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:55.804 07:48:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:55.804 ************************************ 00:24:55.804 START TEST nvmf_control_msg_list 00:24:55.804 ************************************ 00:24:55.804 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:56.063 * Looking for test storage... 00:24:56.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.063 07:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:56.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.063 --rc genhtml_branch_coverage=1 00:24:56.063 --rc genhtml_function_coverage=1 00:24:56.063 --rc genhtml_legend=1 00:24:56.063 --rc geninfo_all_blocks=1 00:24:56.063 --rc geninfo_unexecuted_blocks=1 00:24:56.063 00:24:56.063 ' 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:56.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.063 --rc genhtml_branch_coverage=1 00:24:56.063 --rc genhtml_function_coverage=1 00:24:56.063 --rc genhtml_legend=1 00:24:56.063 --rc geninfo_all_blocks=1 00:24:56.063 --rc geninfo_unexecuted_blocks=1 00:24:56.063 00:24:56.063 ' 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:56.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.063 --rc genhtml_branch_coverage=1 00:24:56.063 --rc genhtml_function_coverage=1 00:24:56.063 --rc genhtml_legend=1 00:24:56.063 --rc geninfo_all_blocks=1 00:24:56.063 --rc geninfo_unexecuted_blocks=1 00:24:56.063 00:24:56.063 ' 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:24:56.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.063 --rc genhtml_branch_coverage=1 00:24:56.063 --rc genhtml_function_coverage=1 00:24:56.063 --rc genhtml_legend=1 00:24:56.063 --rc geninfo_all_blocks=1 00:24:56.063 --rc geninfo_unexecuted_blocks=1 00:24:56.063 00:24:56.063 ' 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.063 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.064 07:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:56.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:56.064 07:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:56.064 07:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.966 07:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:57.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:57.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.966 07:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.966 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:57.967 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.967 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.967 07:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:58.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.225 07:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.225 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.225 07:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.225 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:24:58.225 00:24:58.225 --- 10.0.0.2 ping statistics --- 00:24:58.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.225 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:24:58.225 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:24:58.225 00:24:58.225 --- 10.0.0.1 ping statistics --- 00:24:58.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.226 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3010891 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3010891 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3010891 ']' 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.226 07:48:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:58.226 [2024-11-19 07:48:50.144495] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:58.226 [2024-11-19 07:48:50.144642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.484 [2024-11-19 07:48:50.295322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.742 [2024-11-19 07:48:50.430653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.742 [2024-11-19 07:48:50.430754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.742 [2024-11-19 07:48:50.430801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.742 [2024-11-19 07:48:50.430827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.742 [2024-11-19 07:48:50.430847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:58.742 [2024-11-19 07:48:50.432489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.311 [2024-11-19 07:48:51.143675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.311 Malloc0 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.311 [2024-11-19 07:48:51.214217] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3011065 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3011067 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3011068 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.311 07:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3011065 00:24:59.569 [2024-11-19 07:48:51.354706] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:59.569 [2024-11-19 07:48:51.355140] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:59.569 [2024-11-19 07:48:51.355598] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:00.942 Initializing NVMe Controllers 00:25:00.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:00.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:00.942 Initialization complete. Launching workers. 00:25:00.942 ======================================================== 00:25:00.942 Latency(us) 00:25:00.942 Device Information : IOPS MiB/s Average min max 00:25:00.942 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3886.91 15.18 256.70 230.65 1211.85 00:25:00.942 ======================================================== 00:25:00.942 Total : 3886.91 15.18 256.70 230.65 1211.85 00:25:00.942 00:25:00.942 Initializing NVMe Controllers 00:25:00.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:00.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:00.942 Initialization complete. Launching workers. 
00:25:00.942 ======================================================== 00:25:00.942 Latency(us) 00:25:00.942 Device Information : IOPS MiB/s Average min max 00:25:00.942 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40892.50 40769.91 40982.45 00:25:00.942 ======================================================== 00:25:00.942 Total : 25.00 0.10 40892.50 40769.91 40982.45 00:25:00.942 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3011067 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3011068 00:25:00.942 Initializing NVMe Controllers 00:25:00.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:00.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:00.942 Initialization complete. Launching workers. 00:25:00.942 ======================================================== 00:25:00.942 Latency(us) 00:25:00.942 Device Information : IOPS MiB/s Average min max 00:25:00.942 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41047.80 40621.13 41999.88 00:25:00.942 ======================================================== 00:25:00.942 Total : 25.00 0.10 41047.80 40621.13 41999.88 00:25:00.942 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.942 07:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.942 rmmod nvme_tcp 00:25:00.942 rmmod nvme_fabrics 00:25:00.942 rmmod nvme_keyring 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3010891 ']' 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3010891 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3010891 ']' 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3010891 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010891 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3010891' 00:25:00.942 killing process with pid 3010891 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3010891 00:25:00.942 07:48:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3010891 00:25:02.318 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:02.318 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:02.318 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:02.318 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:02.318 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:02.318 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:02.318 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:02.319 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:02.319 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:02.319 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.319 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.319 07:48:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:04.223 00:25:04.223 real 0m8.311s 00:25:04.223 user 0m8.002s 
00:25:04.223 sys 0m2.878s 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:04.223 ************************************ 00:25:04.223 END TEST nvmf_control_msg_list 00:25:04.223 ************************************ 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:04.223 ************************************ 00:25:04.223 START TEST nvmf_wait_for_buf 00:25:04.223 ************************************ 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:04.223 * Looking for test storage... 
00:25:04.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:04.223 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:04.224 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.482 --rc genhtml_branch_coverage=1 00:25:04.482 --rc genhtml_function_coverage=1 00:25:04.482 --rc genhtml_legend=1 00:25:04.482 --rc geninfo_all_blocks=1 00:25:04.482 --rc geninfo_unexecuted_blocks=1 00:25:04.482 00:25:04.482 ' 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.482 --rc genhtml_branch_coverage=1 00:25:04.482 --rc genhtml_function_coverage=1 00:25:04.482 --rc genhtml_legend=1 00:25:04.482 --rc geninfo_all_blocks=1 00:25:04.482 --rc geninfo_unexecuted_blocks=1 00:25:04.482 00:25:04.482 ' 00:25:04.482 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:04.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.482 --rc genhtml_branch_coverage=1 00:25:04.483 --rc genhtml_function_coverage=1 00:25:04.483 --rc genhtml_legend=1 00:25:04.483 --rc geninfo_all_blocks=1 00:25:04.483 --rc geninfo_unexecuted_blocks=1 00:25:04.483 00:25:04.483 ' 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:04.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.483 --rc genhtml_branch_coverage=1 00:25:04.483 --rc genhtml_function_coverage=1 00:25:04.483 --rc genhtml_legend=1 00:25:04.483 --rc geninfo_all_blocks=1 00:25:04.483 --rc geninfo_unexecuted_blocks=1 00:25:04.483 00:25:04.483 ' 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:04.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:04.483 07:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.015 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:07.016 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:07.016 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:07.016 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.016 07:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:07.016 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.016 07:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.016 07:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:25:07.016 00:25:07.016 --- 10.0.0.2 ping statistics --- 00:25:07.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.016 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:25:07.016 00:25:07.016 --- 10.0.0.1 ping statistics --- 00:25:07.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.016 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3013360 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3013360 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3013360 ']' 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.016 07:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.016 [2024-11-19 07:48:58.678678] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:07.016 [2024-11-19 07:48:58.678852] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.017 [2024-11-19 07:48:58.823445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.274 [2024-11-19 07:48:58.955868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.274 [2024-11-19 07:48:58.955953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:07.274 [2024-11-19 07:48:58.955978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.274 [2024-11-19 07:48:58.956002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.274 [2024-11-19 07:48:58.956022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.274 [2024-11-19 07:48:58.957618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.840 
07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.840 07:48:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.098 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.098 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:08.098 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.098 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.356 Malloc0 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:08.356 [2024-11-19 07:49:00.077743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.356 [2024-11-19 07:49:00.102041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:08.356 07:49:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:08.356 [2024-11-19 07:49:00.253902] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:10.253 Initializing NVMe Controllers 00:25:10.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:10.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:10.253 Initialization complete. Launching workers. 00:25:10.253 ======================================================== 00:25:10.253 Latency(us) 00:25:10.253 Device Information : IOPS MiB/s Average min max 00:25:10.253 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33315.09 23990.98 63838.90 00:25:10.253 ======================================================== 00:25:10.253 Total : 125.00 15.62 33315.09 23990.98 63838.90 00:25:10.253 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.253 07:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.253 rmmod nvme_tcp 00:25:10.253 rmmod nvme_fabrics 00:25:10.253 rmmod nvme_keyring 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3013360 ']' 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3013360 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3013360 ']' 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3013360 
00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3013360 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3013360' 00:25:10.253 killing process with pid 3013360 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3013360 00:25:10.253 07:49:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3013360 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.187 07:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.187 07:49:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:13.718 00:25:13.718 real 0m9.006s 00:25:13.718 user 0m5.529s 00:25:13.718 sys 0m2.305s 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:13.718 ************************************ 00:25:13.718 END TEST nvmf_wait_for_buf 00:25:13.718 ************************************ 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:13.718 ************************************ 00:25:13.718 START TEST nvmf_fuzz 00:25:13.718 ************************************ 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:13.718 * Looking for test storage... 00:25:13.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:13.718 07:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.718 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:13.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.718 --rc genhtml_branch_coverage=1 00:25:13.718 --rc genhtml_function_coverage=1 
00:25:13.719 --rc genhtml_legend=1 00:25:13.719 --rc geninfo_all_blocks=1 00:25:13.719 --rc geninfo_unexecuted_blocks=1 00:25:13.719 00:25:13.719 ' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:13.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.719 --rc genhtml_branch_coverage=1 00:25:13.719 --rc genhtml_function_coverage=1 00:25:13.719 --rc genhtml_legend=1 00:25:13.719 --rc geninfo_all_blocks=1 00:25:13.719 --rc geninfo_unexecuted_blocks=1 00:25:13.719 00:25:13.719 ' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:13.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.719 --rc genhtml_branch_coverage=1 00:25:13.719 --rc genhtml_function_coverage=1 00:25:13.719 --rc genhtml_legend=1 00:25:13.719 --rc geninfo_all_blocks=1 00:25:13.719 --rc geninfo_unexecuted_blocks=1 00:25:13.719 00:25:13.719 ' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:13.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.719 --rc genhtml_branch_coverage=1 00:25:13.719 --rc genhtml_function_coverage=1 00:25:13.719 --rc genhtml_legend=1 00:25:13.719 --rc geninfo_all_blocks=1 00:25:13.719 --rc geninfo_unexecuted_blocks=1 00:25:13.719 00:25:13.719 ' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.719 
07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:13.719 07:49:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.624 07:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:15.624 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:15.624 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:15.624 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:15.624 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.624 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.625 07:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:25:15.625 00:25:15.625 --- 10.0.0.2 ping statistics --- 00:25:15.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.625 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:25:15.625 00:25:15.625 --- 10.0.0.1 ping statistics --- 00:25:15.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.625 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3015860 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3015860 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 3015860 ']' 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.625 07:49:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:16.599 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.599 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:16.599 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.599 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.599 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:16.599 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.599 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:16.599 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.599 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:16.882 Malloc0 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:16.882 07:49:08 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:48.945 Fuzzing completed. 
Shutting down the fuzz application 00:25:48.945 00:25:48.945 Dumping successful admin opcodes: 00:25:48.945 8, 9, 10, 24, 00:25:48.945 Dumping successful io opcodes: 00:25:48.945 0, 9, 00:25:48.945 NS: 0x2000008efec0 I/O qp, Total commands completed: 323952, total successful commands: 1913, random_seed: 98360896 00:25:48.945 NS: 0x2000008efec0 admin qp, Total commands completed: 40816, total successful commands: 333, random_seed: 1864145152 00:25:48.945 07:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:49.512 Fuzzing completed. Shutting down the fuzz application 00:25:49.512 00:25:49.512 Dumping successful admin opcodes: 00:25:49.512 24, 00:25:49.512 Dumping successful io opcodes: 00:25:49.512 00:25:49.512 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2738188640 00:25:49.512 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2738364572 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:49.512 07:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:49.512 rmmod nvme_tcp 00:25:49.512 rmmod nvme_fabrics 00:25:49.512 rmmod nvme_keyring 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3015860 ']' 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3015860 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3015860 ']' 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3015860 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3015860 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3015860' 00:25:49.512 killing process with pid 3015860 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3015860 00:25:49.512 07:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3015860 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.886 07:49:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.793 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.793 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:52.793 00:25:52.793 real 0m39.528s 00:25:52.793 user 0m56.790s 00:25:52.793 sys 0m13.545s 00:25:52.793 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.793 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:52.793 ************************************ 00:25:52.793 END TEST nvmf_fuzz 00:25:52.793 ************************************ 00:25:52.793 07:49:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:52.793 07:49:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:52.793 07:49:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:52.793 07:49:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:52.793 ************************************ 00:25:52.793 START TEST nvmf_multiconnection 00:25:52.793 ************************************ 00:25:52.793 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:53.052 * Looking for test storage... 
00:25:53.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:53.052 07:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:53.052 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.053 --rc genhtml_branch_coverage=1 00:25:53.053 --rc genhtml_function_coverage=1 00:25:53.053 --rc genhtml_legend=1 00:25:53.053 --rc geninfo_all_blocks=1 00:25:53.053 --rc geninfo_unexecuted_blocks=1 00:25:53.053 00:25:53.053 ' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.053 --rc genhtml_branch_coverage=1 00:25:53.053 --rc genhtml_function_coverage=1 00:25:53.053 --rc genhtml_legend=1 00:25:53.053 --rc geninfo_all_blocks=1 00:25:53.053 --rc geninfo_unexecuted_blocks=1 00:25:53.053 00:25:53.053 ' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.053 --rc genhtml_branch_coverage=1 00:25:53.053 --rc genhtml_function_coverage=1 00:25:53.053 --rc genhtml_legend=1 00:25:53.053 --rc geninfo_all_blocks=1 00:25:53.053 --rc geninfo_unexecuted_blocks=1 00:25:53.053 00:25:53.053 ' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.053 --rc genhtml_branch_coverage=1 00:25:53.053 --rc genhtml_function_coverage=1 00:25:53.053 --rc genhtml_legend=1 00:25:53.053 --rc geninfo_all_blocks=1 00:25:53.053 --rc geninfo_unexecuted_blocks=1 00:25:53.053 00:25:53.053 ' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.053 07:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.053 07:49:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.585 07:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:55.585 07:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:55.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:55.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:55.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:55.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:55.585 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.586 07:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:55.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:25:55.586 00:25:55.586 --- 10.0.0.2 ping statistics --- 00:25:55.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.586 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:55.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:25:55.586 00:25:55.586 --- 10.0.0.1 ping statistics --- 00:25:55.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.586 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3021744 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3021744 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3021744 ']' 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.586 07:49:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:55.586 [2024-11-19 07:49:47.304279] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:55.586 [2024-11-19 07:49:47.304433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.586 [2024-11-19 07:49:47.472788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.845 [2024-11-19 07:49:47.620758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.845 [2024-11-19 07:49:47.620840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.845 [2024-11-19 07:49:47.620866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.845 [2024-11-19 07:49:47.620902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.845 [2024-11-19 07:49:47.620926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:55.845 [2024-11-19 07:49:47.623808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.845 [2024-11-19 07:49:47.623868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.845 [2024-11-19 07:49:47.627728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.845 [2024-11-19 07:49:47.627739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.411 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.411 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:56.411 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:56.669 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.669 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.669 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.669 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:56.669 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.669 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.669 [2024-11-19 07:49:48.375231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:56.670 07:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.670 Malloc1 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.670 [2024-11-19 07:49:48.496717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.670 Malloc2 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.670 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 Malloc3 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 Malloc4 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.929 
07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.929 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.188 Malloc5 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.188 07:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.188 Malloc6 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:57.188 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.189 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.189 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.189 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:57.189 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.189 07:49:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.189 Malloc7 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.189 07:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.189 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.447 Malloc8 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.448 07:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.448 Malloc9 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.448 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.706 Malloc10 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:57.706 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.707 Malloc11 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:57.707 
07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:57.707 07:49:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
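The records above show `target/multiconnection.sh` looping `seq 1 11`, and for each index creating a 64 MiB/512-byte-block malloc bdev, an NVMe-oF subsystem `nqn.2016-06.io.spdk:cnode$i` with serial `SPDK$i`, a namespace backed by that bdev, and a TCP listener on 10.0.0.2:4420. A minimal dry-run sketch of that setup loop, reconstructed from the log (the `rpc_cmd` stub and variable names are assumptions; the real suite forwards `rpc_cmd` to `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# Reconstruction of the subsystem setup loop seen in the log above.
# This sketch only prints the RPCs it would issue; in the real test,
# rpc_cmd dispatches to the SPDK scripts/rpc.py client.
set -euo pipefail

NVMF_SUBSYS=11                 # the log shows "seq 1 11"
NVMF_FIRST_TARGET_IP=10.0.0.2  # listener address taken from the log
NVMF_PORT=4420                 # listener port taken from the log

rpc_cmd() {
    # Assumption: stand-in for the suite's real rpc_cmd wrapper.
    echo "rpc.py $*"
}

for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
done
```

The connect half that follows in the log repeats the same per-index pattern from the host side: `nvme connect ... -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420`, then `waitforserial SPDK$i` polls `lsblk -l -o NAME,SERIAL` until the device with that serial appears.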
00:25:58.642 07:49:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:58.642 07:49:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:58.642 07:49:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.642 07:49:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:58.642 07:49:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:00.542 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:00.542 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:00.542 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:26:00.542 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:00.542 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.542 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:00.542 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.542 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:01.109 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:01.110 07:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:01.110 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.110 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:01.110 07:49:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:03.639 07:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:03.639 07:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:03.639 07:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:26:03.639 07:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:03.639 07:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.639 07:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:03.639 07:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.639 07:49:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:03.897 07:49:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:03.897 07:49:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:03.897 07:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:03.897 07:49:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:03.897 07:49:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:05.796 07:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:05.796 07:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:05.796 07:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:06.055 07:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:06.055 07:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:06.055 07:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:06.055 07:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.055 07:49:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:06.622 07:49:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:06.622 07:49:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:06.622 07:49:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.622 
07:49:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:06.622 07:49:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:08.519 07:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:08.519 07:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:08.519 07:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:08.519 07:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:08.519 07:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:08.519 07:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:08.519 07:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.519 07:50:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:09.453 07:50:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:09.453 07:50:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:09.453 07:50:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.453 07:50:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:09.453 07:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:11.982 07:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:11.982 07:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:11.982 07:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:11.982 07:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:11.982 07:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:11.982 07:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:11.982 07:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.982 07:50:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:12.241 07:50:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:12.241 07:50:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:12.241 07:50:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:12.241 07:50:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:12.241 07:50:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:14.210 07:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:14.210 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:14.210 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:14.210 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:14.210 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:14.210 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:14.210 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.211 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:15.171 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:15.171 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:15.171 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.171 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:15.171 07:50:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:17.071 07:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:17.071 07:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:17.071 07:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:17.071 07:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:17.071 07:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.071 07:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:17.071 07:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.071 07:50:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:18.006 07:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:18.006 07:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:18.006 07:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.006 07:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:18.006 07:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:19.905 07:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:19.905 07:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:19.905 07:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:19.905 07:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:19.905 07:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:19.905 07:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:19.905 07:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.905 07:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:20.840 07:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:20.841 07:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:20.841 07:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:20.841 07:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:20.841 07:50:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:22.740 07:50:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:22.740 07:50:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:22.740 07:50:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:22.740 07:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:22.740 07:50:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:22.740 07:50:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:22.740 07:50:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.740 07:50:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:23.674 07:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:23.674 07:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:23.674 07:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:23.674 07:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:23.674 07:50:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:25.573 07:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:25.573 07:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:25.573 07:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:25.573 07:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:25.573 07:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:25.573 07:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:25.573 07:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.573 07:50:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:26.508 07:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:26.508 07:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:26.508 07:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:26.508 07:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:26.508 07:50:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:28.409 07:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:28.409 07:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:28.409 07:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:28.409 07:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:28.409 07:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:28.409 
07:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:28.409 07:50:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:28.409 [global] 00:26:28.409 thread=1 00:26:28.409 invalidate=1 00:26:28.409 rw=read 00:26:28.409 time_based=1 00:26:28.409 runtime=10 00:26:28.409 ioengine=libaio 00:26:28.409 direct=1 00:26:28.409 bs=262144 00:26:28.409 iodepth=64 00:26:28.409 norandommap=1 00:26:28.409 numjobs=1 00:26:28.409 00:26:28.409 [job0] 00:26:28.409 filename=/dev/nvme0n1 00:26:28.409 [job1] 00:26:28.409 filename=/dev/nvme10n1 00:26:28.409 [job2] 00:26:28.409 filename=/dev/nvme1n1 00:26:28.409 [job3] 00:26:28.409 filename=/dev/nvme2n1 00:26:28.409 [job4] 00:26:28.409 filename=/dev/nvme3n1 00:26:28.409 [job5] 00:26:28.409 filename=/dev/nvme4n1 00:26:28.409 [job6] 00:26:28.409 filename=/dev/nvme5n1 00:26:28.409 [job7] 00:26:28.409 filename=/dev/nvme6n1 00:26:28.409 [job8] 00:26:28.409 filename=/dev/nvme7n1 00:26:28.409 [job9] 00:26:28.409 filename=/dev/nvme8n1 00:26:28.409 [job10] 00:26:28.409 filename=/dev/nvme9n1 00:26:28.409 Could not set queue depth (nvme0n1) 00:26:28.409 Could not set queue depth (nvme10n1) 00:26:28.409 Could not set queue depth (nvme1n1) 00:26:28.409 Could not set queue depth (nvme2n1) 00:26:28.409 Could not set queue depth (nvme3n1) 00:26:28.409 Could not set queue depth (nvme4n1) 00:26:28.409 Could not set queue depth (nvme5n1) 00:26:28.409 Could not set queue depth (nvme6n1) 00:26:28.409 Could not set queue depth (nvme7n1) 00:26:28.409 Could not set queue depth (nvme8n1) 00:26:28.409 Could not set queue depth (nvme9n1) 00:26:28.667 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:28.667 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:28.667 fio-3.35 00:26:28.667 Starting 11 threads 00:26:40.871 00:26:40.871 job0: (groupid=0, jobs=1): err= 0: pid=3026115: Tue Nov 19 07:50:31 2024 00:26:40.871 read: IOPS=927, BW=232MiB/s (243MB/s)(2356MiB/10154msec) 00:26:40.871 slat (usec): min=11, max=90516, avg=965.00, stdev=3974.85 00:26:40.871 clat (msec): min=2, max=638, avg=67.95, stdev=60.62 00:26:40.871 lat (msec): min=2, max=638, avg=68.92, stdev=61.16 00:26:40.871 clat percentiles (msec): 00:26:40.872 | 1.00th=[ 6], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 42], 00:26:40.872 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 52], 00:26:40.872 | 70.00th=[ 62], 80.00th=[ 80], 90.00th=[ 118], 95.00th=[ 150], 00:26:40.872 | 99.00th=[ 405], 99.50th=[ 518], 99.90th=[ 535], 99.95th=[ 575], 00:26:40.872 | 99.99th=[ 642] 00:26:40.872 bw ( KiB/s): min=39424, max=419328, per=28.69%, 
avg=239572.70, stdev=112264.82, samples=20 00:26:40.872 iops : min= 154, max= 1638, avg=935.80, stdev=438.58, samples=20 00:26:40.872 lat (msec) : 4=0.14%, 10=1.62%, 20=0.14%, 50=52.57%, 100=30.32% 00:26:40.872 lat (msec) : 250=13.19%, 500=1.50%, 750=0.52% 00:26:40.872 cpu : usr=0.48%, sys=3.04%, ctx=1207, majf=0, minf=4097 00:26:40.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:40.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.872 issued rwts: total=9422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.872 job1: (groupid=0, jobs=1): err= 0: pid=3026116: Tue Nov 19 07:50:31 2024 00:26:40.872 read: IOPS=406, BW=102MiB/s (107MB/s)(1033MiB/10156msec) 00:26:40.872 slat (usec): min=8, max=640942, avg=1385.66, stdev=13451.30 00:26:40.872 clat (usec): min=1929, max=1012.1k, avg=155771.26, stdev=175644.30 00:26:40.872 lat (usec): min=1978, max=1458.5k, avg=157156.92, stdev=177606.55 00:26:40.872 clat percentiles (msec): 00:26:40.872 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 13], 20.00th=[ 57], 00:26:40.872 | 30.00th=[ 67], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 103], 00:26:40.872 | 70.00th=[ 144], 80.00th=[ 245], 90.00th=[ 435], 95.00th=[ 550], 00:26:40.872 | 99.00th=[ 810], 99.50th=[ 818], 99.90th=[ 877], 99.95th=[ 877], 00:26:40.872 | 99.99th=[ 1011] 00:26:40.872 bw ( KiB/s): min=28672, max=241664, per=12.48%, avg=104169.95, stdev=79003.93, samples=20 00:26:40.872 iops : min= 112, max= 944, avg=406.90, stdev=308.62, samples=20 00:26:40.872 lat (msec) : 2=0.02%, 4=0.77%, 10=7.36%, 20=4.28%, 50=6.61% 00:26:40.872 lat (msec) : 100=40.43%, 250=21.56%, 500=11.47%, 750=5.11%, 1000=2.37% 00:26:40.872 lat (msec) : 2000=0.02% 00:26:40.872 cpu : usr=0.23%, sys=1.27%, ctx=1370, majf=0, minf=4097 00:26:40.872 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 
32=0.8%, >=64=98.5% 00:26:40.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.872 issued rwts: total=4133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.872 job2: (groupid=0, jobs=1): err= 0: pid=3026117: Tue Nov 19 07:50:31 2024 00:26:40.872 read: IOPS=251, BW=62.9MiB/s (65.9MB/s)(639MiB/10156msec) 00:26:40.872 slat (usec): min=12, max=358888, avg=3642.74, stdev=20750.32 00:26:40.872 clat (msec): min=29, max=1174, avg=250.50, stdev=251.98 00:26:40.872 lat (msec): min=29, max=1174, avg=254.14, stdev=255.81 00:26:40.872 clat percentiles (msec): 00:26:40.872 | 1.00th=[ 46], 5.00th=[ 71], 10.00th=[ 80], 20.00th=[ 88], 00:26:40.872 | 30.00th=[ 96], 40.00th=[ 109], 50.00th=[ 121], 60.00th=[ 146], 00:26:40.872 | 70.00th=[ 211], 80.00th=[ 451], 90.00th=[ 693], 95.00th=[ 827], 00:26:40.872 | 99.00th=[ 986], 99.50th=[ 1062], 99.90th=[ 1167], 99.95th=[ 1167], 00:26:40.872 | 99.99th=[ 1167] 00:26:40.872 bw ( KiB/s): min= 8704, max=195072, per=7.64%, avg=63769.60, stdev=58261.41, samples=20 00:26:40.872 iops : min= 34, max= 762, avg=249.10, stdev=227.58, samples=20 00:26:40.872 lat (msec) : 50=1.53%, 100=32.05%, 250=39.41%, 500=8.34%, 750=10.53% 00:26:40.872 lat (msec) : 1000=7.51%, 2000=0.63% 00:26:40.872 cpu : usr=0.12%, sys=0.90%, ctx=285, majf=0, minf=3721 00:26:40.872 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:40.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.872 issued rwts: total=2555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.872 job3: (groupid=0, jobs=1): err= 0: pid=3026118: Tue Nov 19 07:50:31 2024 00:26:40.872 read: IOPS=243, BW=60.9MiB/s 
(63.8MB/s)(618MiB/10160msec) 00:26:40.872 slat (usec): min=11, max=520860, avg=3716.07, stdev=23680.21 00:26:40.872 clat (msec): min=31, max=1351, avg=258.98, stdev=252.94 00:26:40.872 lat (msec): min=31, max=1360, avg=262.70, stdev=256.64 00:26:40.872 clat percentiles (msec): 00:26:40.872 | 1.00th=[ 46], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 80], 00:26:40.872 | 30.00th=[ 91], 40.00th=[ 112], 50.00th=[ 153], 60.00th=[ 199], 00:26:40.872 | 70.00th=[ 284], 80.00th=[ 426], 90.00th=[ 592], 95.00th=[ 827], 00:26:40.872 | 99.00th=[ 1167], 99.50th=[ 1167], 99.90th=[ 1217], 99.95th=[ 1351], 00:26:40.872 | 99.99th=[ 1351] 00:26:40.872 bw ( KiB/s): min=14848, max=215040, per=7.39%, avg=61670.40, stdev=55551.28, samples=20 00:26:40.872 iops : min= 58, max= 840, avg=240.90, stdev=217.00, samples=20 00:26:40.872 lat (msec) : 50=2.43%, 100=33.04%, 250=30.77%, 500=17.79%, 750=9.87% 00:26:40.872 lat (msec) : 1000=3.52%, 2000=2.59% 00:26:40.872 cpu : usr=0.10%, sys=0.85%, ctx=265, majf=0, minf=4097 00:26:40.872 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:40.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.872 issued rwts: total=2473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.872 job4: (groupid=0, jobs=1): err= 0: pid=3026119: Tue Nov 19 07:50:31 2024 00:26:40.872 read: IOPS=177, BW=44.4MiB/s (46.6MB/s)(448MiB/10089msec) 00:26:40.872 slat (usec): min=9, max=311464, avg=3013.57, stdev=19094.59 00:26:40.872 clat (msec): min=2, max=973, avg=356.85, stdev=268.55 00:26:40.872 lat (msec): min=2, max=1059, avg=359.86, stdev=271.47 00:26:40.872 clat percentiles (msec): 00:26:40.872 | 1.00th=[ 9], 5.00th=[ 53], 10.00th=[ 62], 20.00th=[ 74], 00:26:40.872 | 30.00th=[ 128], 40.00th=[ 186], 50.00th=[ 313], 60.00th=[ 460], 00:26:40.872 | 70.00th=[ 535], 80.00th=[ 
642], 90.00th=[ 718], 95.00th=[ 852], 00:26:40.872 | 99.00th=[ 944], 99.50th=[ 944], 99.90th=[ 961], 99.95th=[ 978], 00:26:40.872 | 99.99th=[ 978] 00:26:40.872 bw ( KiB/s): min=17408, max=144896, per=5.30%, avg=44288.00, stdev=33696.94, samples=20 00:26:40.872 iops : min= 68, max= 566, avg=173.00, stdev=131.63, samples=20 00:26:40.872 lat (msec) : 4=0.11%, 10=1.28%, 20=0.45%, 50=3.07%, 100=21.36% 00:26:40.872 lat (msec) : 250=19.69%, 500=20.30%, 750=26.10%, 1000=7.64% 00:26:40.872 cpu : usr=0.08%, sys=0.55%, ctx=316, majf=0, minf=4098 00:26:40.872 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:26:40.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.872 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.872 issued rwts: total=1793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.872 job5: (groupid=0, jobs=1): err= 0: pid=3026123: Tue Nov 19 07:50:31 2024 00:26:40.872 read: IOPS=121, BW=30.5MiB/s (31.9MB/s)(308MiB/10094msec) 00:26:40.872 slat (usec): min=13, max=298423, avg=7388.40, stdev=28241.97 00:26:40.872 clat (msec): min=14, max=1052, avg=517.46, stdev=205.64 00:26:40.872 lat (msec): min=119, max=1052, avg=524.85, stdev=209.66 00:26:40.872 clat percentiles (msec): 00:26:40.872 | 1.00th=[ 130], 5.00th=[ 167], 10.00th=[ 194], 20.00th=[ 368], 00:26:40.872 | 30.00th=[ 426], 40.00th=[ 468], 50.00th=[ 510], 60.00th=[ 558], 00:26:40.872 | 70.00th=[ 609], 80.00th=[ 693], 90.00th=[ 802], 95.00th=[ 852], 00:26:40.872 | 99.00th=[ 1003], 99.50th=[ 1003], 99.90th=[ 1053], 99.95th=[ 1053], 00:26:40.872 | 99.99th=[ 1053] 00:26:40.872 bw ( KiB/s): min=13824, max=67584, per=3.58%, avg=29852.80, stdev=11455.53, samples=20 00:26:40.872 iops : min= 54, max= 264, avg=116.60, stdev=44.75, samples=20 00:26:40.872 lat (msec) : 20=0.08%, 250=13.82%, 500=34.55%, 750=38.70%, 1000=11.95% 00:26:40.872 lat (msec) : 2000=0.89% 
00:26:40.872 cpu : usr=0.09%, sys=0.48%, ctx=153, majf=0, minf=4097 00:26:40.872 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.9% 00:26:40.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.872 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.872 issued rwts: total=1230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.872 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.872 job6: (groupid=0, jobs=1): err= 0: pid=3026131: Tue Nov 19 07:50:31 2024 00:26:40.872 read: IOPS=318, BW=79.5MiB/s (83.4MB/s)(805MiB/10117msec) 00:26:40.872 slat (usec): min=8, max=439245, avg=1730.21, stdev=13541.63 00:26:40.872 clat (usec): min=1521, max=1399.2k, avg=199266.97, stdev=272513.23 00:26:40.872 lat (usec): min=1574, max=1399.2k, avg=200997.18, stdev=274898.18 00:26:40.872 clat percentiles (msec): 00:26:40.872 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 10], 20.00th=[ 21], 00:26:40.872 | 30.00th=[ 36], 40.00th=[ 51], 50.00th=[ 66], 60.00th=[ 72], 00:26:40.873 | 70.00th=[ 161], 80.00th=[ 481], 90.00th=[ 625], 95.00th=[ 760], 00:26:40.873 | 99.00th=[ 986], 99.50th=[ 1368], 99.90th=[ 1401], 99.95th=[ 1401], 00:26:40.873 | 99.99th=[ 1401] 00:26:40.873 bw ( KiB/s): min=14848, max=391168, per=9.67%, avg=80768.00, stdev=103750.04, samples=20 00:26:40.873 iops : min= 58, max= 1528, avg=315.50, stdev=405.27, samples=20 00:26:40.873 lat (msec) : 2=0.22%, 4=4.85%, 10=7.80%, 20=6.74%, 50=20.07% 00:26:40.873 lat (msec) : 100=26.10%, 250=8.98%, 500=7.77%, 750=12.08%, 1000=4.44% 00:26:40.873 lat (msec) : 2000=0.96% 00:26:40.873 cpu : usr=0.17%, sys=0.83%, ctx=924, majf=0, minf=4097 00:26:40.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:40.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.873 issued rwts: total=3219,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:40.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.873 job7: (groupid=0, jobs=1): err= 0: pid=3026137: Tue Nov 19 07:50:31 2024 00:26:40.873 read: IOPS=149, BW=37.4MiB/s (39.2MB/s)(380MiB/10153msec) 00:26:40.873 slat (usec): min=8, max=544062, avg=3167.51, stdev=25197.04 00:26:40.873 clat (msec): min=41, max=1577, avg=424.15, stdev=308.99 00:26:40.873 lat (msec): min=41, max=1577, avg=427.32, stdev=312.34 00:26:40.873 clat percentiles (msec): 00:26:40.873 | 1.00th=[ 64], 5.00th=[ 116], 10.00th=[ 127], 20.00th=[ 163], 00:26:40.873 | 30.00th=[ 184], 40.00th=[ 241], 50.00th=[ 351], 60.00th=[ 435], 00:26:40.873 | 70.00th=[ 531], 80.00th=[ 617], 90.00th=[ 885], 95.00th=[ 1083], 00:26:40.873 | 99.00th=[ 1351], 99.50th=[ 1519], 99.90th=[ 1552], 99.95th=[ 1586], 00:26:40.873 | 99.99th=[ 1586] 00:26:40.873 bw ( KiB/s): min= 7680, max=112128, per=4.46%, avg=37277.35, stdev=23423.82, samples=20 00:26:40.873 iops : min= 30, max= 438, avg=145.60, stdev=91.50, samples=20 00:26:40.873 lat (msec) : 50=0.26%, 100=2.24%, 250=39.43%, 500=23.63%, 750=19.62% 00:26:40.873 lat (msec) : 1000=7.70%, 2000=7.11% 00:26:40.873 cpu : usr=0.03%, sys=0.50%, ctx=224, majf=0, minf=4097 00:26:40.873 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.9% 00:26:40.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.873 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.873 issued rwts: total=1519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.873 job8: (groupid=0, jobs=1): err= 0: pid=3026159: Tue Nov 19 07:50:31 2024 00:26:40.873 read: IOPS=181, BW=45.4MiB/s (47.6MB/s)(460MiB/10116msec) 00:26:40.873 slat (usec): min=9, max=516716, avg=4592.37, stdev=27397.97 00:26:40.873 clat (msec): min=61, max=1174, avg=347.36, stdev=238.95 00:26:40.873 lat (msec): min=61, max=1251, avg=351.95, stdev=243.33 00:26:40.873 
clat percentiles (msec): 00:26:40.873 | 1.00th=[ 77], 5.00th=[ 85], 10.00th=[ 94], 20.00th=[ 113], 00:26:40.873 | 30.00th=[ 148], 40.00th=[ 199], 50.00th=[ 271], 60.00th=[ 384], 00:26:40.873 | 70.00th=[ 485], 80.00th=[ 609], 90.00th=[ 693], 95.00th=[ 760], 00:26:40.873 | 99.00th=[ 919], 99.50th=[ 944], 99.90th=[ 1083], 99.95th=[ 1167], 00:26:40.873 | 99.99th=[ 1167] 00:26:40.873 bw ( KiB/s): min= 7168, max=121344, per=5.44%, avg=45440.00, stdev=29369.85, samples=20 00:26:40.873 iops : min= 28, max= 474, avg=177.50, stdev=114.73, samples=20 00:26:40.873 lat (msec) : 100=12.73%, 250=34.11%, 500=25.57%, 750=22.25%, 1000=4.95% 00:26:40.873 lat (msec) : 2000=0.38% 00:26:40.873 cpu : usr=0.12%, sys=0.45%, ctx=233, majf=0, minf=4097 00:26:40.873 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:40.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.873 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.873 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.873 job9: (groupid=0, jobs=1): err= 0: pid=3026176: Tue Nov 19 07:50:31 2024 00:26:40.873 read: IOPS=278, BW=69.6MiB/s (73.0MB/s)(702MiB/10088msec) 00:26:40.873 slat (usec): min=13, max=436397, avg=3557.65, stdev=17735.57 00:26:40.873 clat (msec): min=26, max=871, avg=226.11, stdev=203.00 00:26:40.873 lat (msec): min=26, max=1048, avg=229.66, stdev=206.64 00:26:40.873 clat percentiles (msec): 00:26:40.873 | 1.00th=[ 41], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 65], 00:26:40.873 | 30.00th=[ 87], 40.00th=[ 111], 50.00th=[ 140], 60.00th=[ 186], 00:26:40.873 | 70.00th=[ 262], 80.00th=[ 384], 90.00th=[ 550], 95.00th=[ 693], 00:26:40.873 | 99.00th=[ 802], 99.50th=[ 810], 99.90th=[ 869], 99.95th=[ 869], 00:26:40.873 | 99.99th=[ 869] 00:26:40.873 bw ( KiB/s): min=16896, max=226816, per=8.42%, avg=70297.60, stdev=64179.67, samples=20 00:26:40.873 
iops : min= 66, max= 886, avg=274.60, stdev=250.70, samples=20 00:26:40.873 lat (msec) : 50=5.27%, 100=31.22%, 250=31.19%, 500=19.40%, 750=10.25% 00:26:40.873 lat (msec) : 1000=2.67% 00:26:40.873 cpu : usr=0.13%, sys=1.08%, ctx=223, majf=0, minf=4097 00:26:40.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:40.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.873 issued rwts: total=2809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.873 job10: (groupid=0, jobs=1): err= 0: pid=3026191: Tue Nov 19 07:50:31 2024 00:26:40.873 read: IOPS=212, BW=53.1MiB/s (55.6MB/s)(537MiB/10109msec) 00:26:40.873 slat (usec): min=8, max=228279, avg=3411.47, stdev=16277.95 00:26:40.873 clat (msec): min=2, max=977, avg=297.84, stdev=208.46 00:26:40.873 lat (msec): min=2, max=1002, avg=301.25, stdev=210.97 00:26:40.873 clat percentiles (msec): 00:26:40.873 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 90], 20.00th=[ 121], 00:26:40.873 | 30.00th=[ 150], 40.00th=[ 186], 50.00th=[ 228], 60.00th=[ 330], 00:26:40.873 | 70.00th=[ 380], 80.00th=[ 485], 90.00th=[ 600], 95.00th=[ 726], 00:26:40.873 | 99.00th=[ 852], 99.50th=[ 953], 99.90th=[ 978], 99.95th=[ 978], 00:26:40.873 | 99.99th=[ 978] 00:26:40.873 bw ( KiB/s): min=16384, max=147456, per=6.38%, avg=53299.20, stdev=36650.76, samples=20 00:26:40.873 iops : min= 64, max= 576, avg=208.20, stdev=143.17, samples=20 00:26:40.873 lat (msec) : 4=0.19%, 10=2.05%, 20=1.77%, 50=1.54%, 100=8.62% 00:26:40.873 lat (msec) : 250=38.72%, 500=28.84%, 750=15.19%, 1000=3.08% 00:26:40.873 cpu : usr=0.09%, sys=0.56%, ctx=325, majf=0, minf=4097 00:26:40.873 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:40.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.873 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.873 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.873 00:26:40.873 Run status group 0 (all jobs): 00:26:40.873 READ: bw=815MiB/s (855MB/s), 30.5MiB/s-232MiB/s (31.9MB/s-243MB/s), io=8284MiB (8687MB), run=10088-10160msec 00:26:40.873 00:26:40.873 Disk stats (read/write): 00:26:40.873 nvme0n1: ios=18680/0, merge=0/0, ticks=1215173/0, in_queue=1215173, util=97.04% 00:26:40.873 nvme10n1: ios=8112/0, merge=0/0, ticks=1232074/0, in_queue=1232074, util=97.26% 00:26:40.873 nvme1n1: ios=4965/0, merge=0/0, ticks=1225164/0, in_queue=1225164, util=97.55% 00:26:40.873 nvme2n1: ios=4776/0, merge=0/0, ticks=1223773/0, in_queue=1223773, util=97.70% 00:26:40.873 nvme3n1: ios=3396/0, merge=0/0, ticks=1238138/0, in_queue=1238138, util=97.79% 00:26:40.873 nvme4n1: ios=2309/0, merge=0/0, ticks=1210744/0, in_queue=1210744, util=98.15% 00:26:40.874 nvme5n1: ios=6277/0, merge=0/0, ticks=1239614/0, in_queue=1239614, util=98.33% 00:26:40.874 nvme6n1: ios=2867/0, merge=0/0, ticks=1232780/0, in_queue=1232780, util=98.45% 00:26:40.874 nvme7n1: ios=3487/0, merge=0/0, ticks=1230624/0, in_queue=1230624, util=98.91% 00:26:40.874 nvme8n1: ios=5438/0, merge=0/0, ticks=1228364/0, in_queue=1228364, util=99.11% 00:26:40.874 nvme9n1: ios=4081/0, merge=0/0, ticks=1225524/0, in_queue=1225524, util=99.25% 00:26:40.874 07:50:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:40.874 [global] 00:26:40.874 thread=1 00:26:40.874 invalidate=1 00:26:40.874 rw=randwrite 00:26:40.874 time_based=1 00:26:40.874 runtime=10 00:26:40.874 ioengine=libaio 00:26:40.874 direct=1 00:26:40.874 bs=262144 00:26:40.874 iodepth=64 00:26:40.874 norandommap=1 00:26:40.874 numjobs=1 00:26:40.874 00:26:40.874 [job0] 
00:26:40.874 filename=/dev/nvme0n1 00:26:40.874 [job1] 00:26:40.874 filename=/dev/nvme10n1 00:26:40.874 [job2] 00:26:40.874 filename=/dev/nvme1n1 00:26:40.874 [job3] 00:26:40.874 filename=/dev/nvme2n1 00:26:40.874 [job4] 00:26:40.874 filename=/dev/nvme3n1 00:26:40.874 [job5] 00:26:40.874 filename=/dev/nvme4n1 00:26:40.874 [job6] 00:26:40.874 filename=/dev/nvme5n1 00:26:40.874 [job7] 00:26:40.874 filename=/dev/nvme6n1 00:26:40.874 [job8] 00:26:40.874 filename=/dev/nvme7n1 00:26:40.874 [job9] 00:26:40.874 filename=/dev/nvme8n1 00:26:40.874 [job10] 00:26:40.874 filename=/dev/nvme9n1 00:26:40.874 Could not set queue depth (nvme0n1) 00:26:40.874 Could not set queue depth (nvme10n1) 00:26:40.874 Could not set queue depth (nvme1n1) 00:26:40.874 Could not set queue depth (nvme2n1) 00:26:40.874 Could not set queue depth (nvme3n1) 00:26:40.874 Could not set queue depth (nvme4n1) 00:26:40.874 Could not set queue depth (nvme5n1) 00:26:40.874 Could not set queue depth (nvme6n1) 00:26:40.874 Could not set queue depth (nvme7n1) 00:26:40.874 Could not set queue depth (nvme8n1) 00:26:40.874 Could not set queue depth (nvme9n1) 00:26:40.874 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:40.874 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.874 fio-3.35 00:26:40.874 Starting 11 threads 00:26:50.848 00:26:50.848 job0: (groupid=0, jobs=1): err= 0: pid=3027000: Tue Nov 19 07:50:42 2024 00:26:50.848 write: IOPS=573, BW=143MiB/s (150MB/s)(1473MiB/10273msec); 0 zone resets 00:26:50.848 slat (usec): min=19, max=108319, avg=1161.02, stdev=3981.67 00:26:50.848 clat (msec): min=4, max=686, avg=110.36, stdev=113.30 00:26:50.848 lat (msec): min=4, max=686, avg=111.52, stdev=114.21 00:26:50.848 clat percentiles (msec): 00:26:50.848 | 1.00th=[ 13], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 54], 00:26:50.848 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 66], 00:26:50.848 | 70.00th=[ 92], 80.00th=[ 140], 90.00th=[ 245], 95.00th=[ 426], 00:26:50.848 | 99.00th=[ 542], 99.50th=[ 617], 99.90th=[ 676], 99.95th=[ 684], 00:26:50.848 | 99.99th=[ 684] 00:26:50.848 bw ( KiB/s): min=22528, max=282624, per=16.06%, avg=149222.40, stdev=93432.04, samples=20 00:26:50.848 iops : min= 88, max= 1104, avg=582.90, stdev=364.97, samples=20 00:26:50.848 lat (msec) : 10=0.44%, 20=1.58%, 50=8.72%, 100=61.32%, 250=18.53% 00:26:50.848 lat (msec) : 500=7.76%, 750=1.65% 00:26:50.848 cpu : usr=1.42%, sys=1.79%, ctx=2717, majf=0, minf=2 00:26:50.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:50.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:26:50.848 issued rwts: total=0,5892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.848 job1: (groupid=0, jobs=1): err= 0: pid=3027012: Tue Nov 19 07:50:42 2024 00:26:50.848 write: IOPS=455, BW=114MiB/s (119MB/s)(1145MiB/10053msec); 0 zone resets 00:26:50.848 slat (usec): min=18, max=162873, avg=1683.37, stdev=5144.70 00:26:50.848 clat (usec): min=1848, max=581778, avg=138730.66, stdev=111590.18 00:26:50.848 lat (msec): min=2, max=581, avg=140.41, stdev=112.82 00:26:50.848 clat percentiles (msec): 00:26:50.848 | 1.00th=[ 11], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 58], 00:26:50.848 | 30.00th=[ 65], 40.00th=[ 77], 50.00th=[ 88], 60.00th=[ 118], 00:26:50.848 | 70.00th=[ 153], 80.00th=[ 228], 90.00th=[ 313], 95.00th=[ 397], 00:26:50.848 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 550], 00:26:50.848 | 99.99th=[ 584] 00:26:50.848 bw ( KiB/s): min=35328, max=251904, per=12.44%, avg=115614.75, stdev=73919.47, samples=20 00:26:50.848 iops : min= 138, max= 984, avg=451.60, stdev=288.77, samples=20 00:26:50.848 lat (msec) : 2=0.02%, 4=0.09%, 10=0.85%, 20=0.81%, 50=2.31% 00:26:50.848 lat (msec) : 100=50.49%, 250=27.54%, 500=16.95%, 750=0.94% 00:26:50.848 cpu : usr=1.30%, sys=1.48%, ctx=1771, majf=0, minf=1 00:26:50.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:50.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.848 issued rwts: total=0,4579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.848 job2: (groupid=0, jobs=1): err= 0: pid=3027013: Tue Nov 19 07:50:42 2024 00:26:50.848 write: IOPS=269, BW=67.4MiB/s (70.7MB/s)(693MiB/10280msec); 0 zone resets 00:26:50.848 slat (usec): min=21, max=246151, avg=2806.74, stdev=9811.83 00:26:50.848 clat (msec): min=6, max=687, 
avg=234.41, stdev=159.77 00:26:50.848 lat (msec): min=6, max=688, avg=237.22, stdev=161.96 00:26:50.848 clat percentiles (msec): 00:26:50.848 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 62], 20.00th=[ 89], 00:26:50.848 | 30.00th=[ 130], 40.00th=[ 161], 50.00th=[ 184], 60.00th=[ 222], 00:26:50.848 | 70.00th=[ 309], 80.00th=[ 405], 90.00th=[ 481], 95.00th=[ 523], 00:26:50.848 | 99.00th=[ 625], 99.50th=[ 634], 99.90th=[ 676], 99.95th=[ 684], 00:26:50.848 | 99.99th=[ 693] 00:26:50.848 bw ( KiB/s): min=26624, max=143584, per=7.46%, avg=69284.80, stdev=36175.08, samples=20 00:26:50.848 iops : min= 104, max= 560, avg=270.60, stdev=141.21, samples=20 00:26:50.848 lat (msec) : 10=0.18%, 20=2.42%, 50=5.02%, 100=15.45%, 250=39.55% 00:26:50.848 lat (msec) : 500=30.39%, 750=7.00% 00:26:50.848 cpu : usr=0.76%, sys=1.08%, ctx=1382, majf=0, minf=1 00:26:50.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:50.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.848 issued rwts: total=0,2771,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.848 job3: (groupid=0, jobs=1): err= 0: pid=3027014: Tue Nov 19 07:50:42 2024 00:26:50.848 write: IOPS=198, BW=49.6MiB/s (52.0MB/s)(509MiB/10274msec); 0 zone resets 00:26:50.848 slat (usec): min=19, max=305924, avg=3686.50, stdev=11928.64 00:26:50.848 clat (usec): min=1930, max=765013, avg=318780.62, stdev=140680.70 00:26:50.848 lat (msec): min=2, max=765, avg=322.47, stdev=142.66 00:26:50.848 clat percentiles (msec): 00:26:50.848 | 1.00th=[ 10], 5.00th=[ 73], 10.00th=[ 130], 20.00th=[ 192], 00:26:50.848 | 30.00th=[ 257], 40.00th=[ 296], 50.00th=[ 321], 60.00th=[ 355], 00:26:50.848 | 70.00th=[ 384], 80.00th=[ 439], 90.00th=[ 489], 95.00th=[ 567], 00:26:50.848 | 99.00th=[ 651], 99.50th=[ 667], 99.90th=[ 735], 99.95th=[ 768], 00:26:50.848 | 
99.99th=[ 768] 00:26:50.848 bw ( KiB/s): min=23040, max=95744, per=5.44%, avg=50542.70, stdev=18947.04, samples=20 00:26:50.848 iops : min= 90, max= 374, avg=197.40, stdev=73.95, samples=20 00:26:50.848 lat (msec) : 2=0.05%, 4=0.10%, 10=0.88%, 20=0.74%, 50=1.47% 00:26:50.848 lat (msec) : 100=3.93%, 250=21.94%, 500=62.20%, 750=8.59%, 1000=0.10% 00:26:50.848 cpu : usr=0.65%, sys=0.66%, ctx=1019, majf=0, minf=1 00:26:50.848 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:50.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.848 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.848 issued rwts: total=0,2037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.848 job4: (groupid=0, jobs=1): err= 0: pid=3027015: Tue Nov 19 07:50:42 2024 00:26:50.848 write: IOPS=201, BW=50.3MiB/s (52.8MB/s)(517MiB/10270msec); 0 zone resets 00:26:50.848 slat (usec): min=25, max=41314, avg=4618.78, stdev=9165.62 00:26:50.848 clat (msec): min=23, max=758, avg=312.99, stdev=125.86 00:26:50.848 lat (msec): min=23, max=758, avg=317.61, stdev=127.48 00:26:50.848 clat percentiles (msec): 00:26:50.848 | 1.00th=[ 58], 5.00th=[ 138], 10.00th=[ 161], 20.00th=[ 188], 00:26:50.848 | 30.00th=[ 241], 40.00th=[ 271], 50.00th=[ 321], 60.00th=[ 338], 00:26:50.848 | 70.00th=[ 359], 80.00th=[ 439], 90.00th=[ 481], 95.00th=[ 535], 00:26:50.848 | 99.00th=[ 600], 99.50th=[ 659], 99.90th=[ 726], 99.95th=[ 760], 00:26:50.848 | 99.99th=[ 760] 00:26:50.848 bw ( KiB/s): min=28672, max=97792, per=5.52%, avg=51334.15, stdev=18864.56, samples=20 00:26:50.848 iops : min= 112, max= 382, avg=200.50, stdev=73.68, samples=20 00:26:50.848 lat (msec) : 50=0.58%, 100=2.51%, 250=32.74%, 500=57.45%, 750=6.62% 00:26:50.848 lat (msec) : 1000=0.10% 00:26:50.848 cpu : usr=0.56%, sys=0.73%, ctx=607, majf=0, minf=1 00:26:50.848 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 
32=1.5%, >=64=97.0% 00:26:50.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.848 issued rwts: total=0,2068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.848 job5: (groupid=0, jobs=1): err= 0: pid=3027016: Tue Nov 19 07:50:42 2024 00:26:50.848 write: IOPS=252, BW=63.0MiB/s (66.1MB/s)(642MiB/10190msec); 0 zone resets 00:26:50.848 slat (usec): min=24, max=166893, avg=3260.95, stdev=7721.47 00:26:50.848 clat (msec): min=10, max=562, avg=250.42, stdev=104.22 00:26:50.848 lat (msec): min=11, max=568, avg=253.68, stdev=105.37 00:26:50.848 clat percentiles (msec): 00:26:50.848 | 1.00th=[ 41], 5.00th=[ 99], 10.00th=[ 132], 20.00th=[ 148], 00:26:50.848 | 30.00th=[ 171], 40.00th=[ 224], 50.00th=[ 251], 60.00th=[ 284], 00:26:50.848 | 70.00th=[ 305], 80.00th=[ 321], 90.00th=[ 397], 95.00th=[ 435], 00:26:50.848 | 99.00th=[ 510], 99.50th=[ 535], 99.90th=[ 550], 99.95th=[ 558], 00:26:50.848 | 99.99th=[ 567] 00:26:50.848 bw ( KiB/s): min=30720, max=118272, per=6.90%, avg=64102.40, stdev=27331.60, samples=20 00:26:50.848 iops : min= 120, max= 462, avg=250.40, stdev=106.76, samples=20 00:26:50.848 lat (msec) : 20=0.19%, 50=1.13%, 100=3.86%, 250=44.94%, 500=48.44% 00:26:50.848 lat (msec) : 750=1.44% 00:26:50.848 cpu : usr=0.79%, sys=0.82%, ctx=969, majf=0, minf=1 00:26:50.848 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:26:50.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.848 issued rwts: total=0,2568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.848 job6: (groupid=0, jobs=1): err= 0: pid=3027017: Tue Nov 19 07:50:42 2024 00:26:50.848 write: IOPS=273, BW=68.3MiB/s 
(71.7MB/s)(696MiB/10187msec); 0 zone resets 00:26:50.849 slat (usec): min=16, max=55682, avg=2500.45, stdev=6667.03 00:26:50.849 clat (msec): min=3, max=579, avg=231.49, stdev=117.30 00:26:50.849 lat (msec): min=3, max=579, avg=233.99, stdev=118.96 00:26:50.849 clat percentiles (msec): 00:26:50.849 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 51], 20.00th=[ 131], 00:26:50.849 | 30.00th=[ 157], 40.00th=[ 207], 50.00th=[ 245], 60.00th=[ 275], 00:26:50.849 | 70.00th=[ 305], 80.00th=[ 342], 90.00th=[ 363], 95.00th=[ 397], 00:26:50.849 | 99.00th=[ 489], 99.50th=[ 493], 99.90th=[ 567], 99.95th=[ 575], 00:26:50.849 | 99.99th=[ 584] 00:26:50.849 bw ( KiB/s): min=38912, max=118272, per=7.50%, avg=69657.60, stdev=26544.84, samples=20 00:26:50.849 iops : min= 152, max= 462, avg=272.10, stdev=103.69, samples=20 00:26:50.849 lat (msec) : 4=0.04%, 10=1.22%, 20=3.52%, 50=5.21%, 100=5.82% 00:26:50.849 lat (msec) : 250=36.95%, 500=46.86%, 750=0.39% 00:26:50.849 cpu : usr=0.72%, sys=0.94%, ctx=1572, majf=0, minf=2 00:26:50.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:26:50.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.849 issued rwts: total=0,2785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.849 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.849 job7: (groupid=0, jobs=1): err= 0: pid=3027018: Tue Nov 19 07:50:42 2024 00:26:50.849 write: IOPS=224, BW=56.1MiB/s (58.8MB/s)(576MiB/10275msec); 0 zone resets 00:26:50.849 slat (usec): min=21, max=152554, avg=3340.35, stdev=8572.16 00:26:50.849 clat (msec): min=4, max=758, avg=281.80, stdev=139.01 00:26:50.849 lat (msec): min=4, max=758, avg=285.14, stdev=140.24 00:26:50.849 clat percentiles (msec): 00:26:50.849 | 1.00th=[ 13], 5.00th=[ 83], 10.00th=[ 118], 20.00th=[ 163], 00:26:50.849 | 30.00th=[ 182], 40.00th=[ 230], 50.00th=[ 264], 60.00th=[ 321], 00:26:50.849 | 
70.00th=[ 355], 80.00th=[ 393], 90.00th=[ 472], 95.00th=[ 527], 00:26:50.849 | 99.00th=[ 609], 99.50th=[ 659], 99.90th=[ 726], 99.95th=[ 760], 00:26:50.849 | 99.99th=[ 760] 00:26:50.849 bw ( KiB/s): min=26112, max=115200, per=6.17%, avg=57369.60, stdev=25015.29, samples=20 00:26:50.849 iops : min= 102, max= 450, avg=224.10, stdev=97.72, samples=20 00:26:50.849 lat (msec) : 10=0.74%, 20=0.87%, 50=1.56%, 100=4.82%, 250=39.97% 00:26:50.849 lat (msec) : 500=45.53%, 750=6.42%, 1000=0.09% 00:26:50.849 cpu : usr=0.72%, sys=0.70%, ctx=971, majf=0, minf=1 00:26:50.849 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:50.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.849 issued rwts: total=0,2304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.849 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.849 job8: (groupid=0, jobs=1): err= 0: pid=3027019: Tue Nov 19 07:50:42 2024 00:26:50.849 write: IOPS=194, BW=48.7MiB/s (51.0MB/s)(500MiB/10274msec); 0 zone resets 00:26:50.849 slat (usec): min=23, max=58566, avg=4749.80, stdev=9333.10 00:26:50.849 clat (msec): min=23, max=761, avg=323.82, stdev=121.33 00:26:50.849 lat (msec): min=23, max=761, avg=328.57, stdev=122.59 00:26:50.849 clat percentiles (msec): 00:26:50.849 | 1.00th=[ 85], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 215], 00:26:50.849 | 30.00th=[ 245], 40.00th=[ 284], 50.00th=[ 330], 60.00th=[ 355], 00:26:50.849 | 70.00th=[ 368], 80.00th=[ 443], 90.00th=[ 485], 95.00th=[ 542], 00:26:50.849 | 99.00th=[ 600], 99.50th=[ 667], 99.90th=[ 760], 99.95th=[ 760], 00:26:50.849 | 99.99th=[ 760] 00:26:50.849 bw ( KiB/s): min=26624, max=96256, per=5.34%, avg=49587.20, stdev=17727.86, samples=20 00:26:50.849 iops : min= 104, max= 376, avg=193.70, stdev=69.25, samples=20 00:26:50.849 lat (msec) : 50=0.40%, 100=0.80%, 250=32.40%, 500=59.05%, 750=7.25% 00:26:50.849 lat (msec) : 
1000=0.10% 00:26:50.849 cpu : usr=0.62%, sys=0.62%, ctx=554, majf=0, minf=1 00:26:50.849 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:50.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.849 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.849 issued rwts: total=0,2000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.849 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.849 job9: (groupid=0, jobs=1): err= 0: pid=3027020: Tue Nov 19 07:50:42 2024 00:26:50.849 write: IOPS=697, BW=174MiB/s (183MB/s)(1754MiB/10056msec); 0 zone resets 00:26:50.849 slat (usec): min=24, max=237585, avg=1357.08, stdev=4616.91 00:26:50.849 clat (msec): min=4, max=368, avg=90.32, stdev=50.62 00:26:50.849 lat (msec): min=4, max=415, avg=91.67, stdev=51.37 00:26:50.849 clat percentiles (msec): 00:26:50.849 | 1.00th=[ 22], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 58], 00:26:50.849 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 74], 00:26:50.849 | 70.00th=[ 84], 80.00th=[ 136], 90.00th=[ 171], 95.00th=[ 194], 00:26:50.849 | 99.00th=[ 243], 99.50th=[ 257], 99.90th=[ 363], 99.95th=[ 368], 00:26:50.849 | 99.99th=[ 368] 00:26:50.849 bw ( KiB/s): min=83968, max=277504, per=19.15%, avg=177980.85, stdev=71192.85, samples=20 00:26:50.849 iops : min= 328, max= 1084, avg=695.20, stdev=278.14, samples=20 00:26:50.849 lat (msec) : 10=0.23%, 20=0.56%, 50=3.19%, 100=71.48%, 250=23.91% 00:26:50.849 lat (msec) : 500=0.64% 00:26:50.849 cpu : usr=2.30%, sys=2.08%, ctx=2005, majf=0, minf=1 00:26:50.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:50.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.849 issued rwts: total=0,7015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.849 latency : target=0, window=0, percentile=100.00%, depth=64 
00:26:50.849 job10: (groupid=0, jobs=1): err= 0: pid=3027023: Tue Nov 19 07:50:42 2024 00:26:50.849 write: IOPS=323, BW=81.0MiB/s (84.9MB/s)(825MiB/10180msec); 0 zone resets 00:26:50.849 slat (usec): min=20, max=39022, avg=1771.60, stdev=5436.28 00:26:50.849 clat (usec): min=1040, max=553677, avg=195666.79, stdev=120770.43 00:26:50.849 lat (usec): min=1089, max=553728, avg=197438.39, stdev=122175.33 00:26:50.849 clat percentiles (msec): 00:26:50.849 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 23], 20.00th=[ 87], 00:26:50.849 | 30.00th=[ 131], 40.00th=[ 148], 50.00th=[ 186], 60.00th=[ 241], 00:26:50.849 | 70.00th=[ 275], 80.00th=[ 305], 90.00th=[ 342], 95.00th=[ 409], 00:26:50.849 | 99.00th=[ 464], 99.50th=[ 481], 99.90th=[ 535], 99.95th=[ 550], 00:26:50.849 | 99.99th=[ 558] 00:26:50.849 bw ( KiB/s): min=39424, max=174592, per=8.91%, avg=82816.00, stdev=38069.96, samples=20 00:26:50.849 iops : min= 154, max= 682, avg=323.50, stdev=148.71, samples=20 00:26:50.849 lat (msec) : 2=0.91%, 4=1.49%, 10=2.97%, 20=3.09%, 50=8.76% 00:26:50.849 lat (msec) : 100=4.70%, 250=41.87%, 500=35.90%, 750=0.30% 00:26:50.849 cpu : usr=0.92%, sys=1.18%, ctx=2122, majf=0, minf=1 00:26:50.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:50.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.849 issued rwts: total=0,3298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.849 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.849 00:26:50.849 Run status group 0 (all jobs): 00:26:50.849 WRITE: bw=908MiB/s (952MB/s), 48.7MiB/s-174MiB/s (51.0MB/s-183MB/s), io=9329MiB (9782MB), run=10053-10280msec 00:26:50.849 00:26:50.849 Disk stats (read/write): 00:26:50.849 nvme0n1: ios=49/11717, merge=0/0, ticks=59/1236416, in_queue=1236475, util=97.27% 00:26:50.849 nvme10n1: ios=42/8827, merge=0/0, ticks=1551/1217331, in_queue=1218882, util=99.83% 
00:26:50.849 nvme1n1: ios=43/5465, merge=0/0, ticks=6455/1172709, in_queue=1179164, util=99.85% 00:26:50.849 nvme2n1: ios=47/4006, merge=0/0, ticks=2493/1191433, in_queue=1193926, util=99.85% 00:26:50.849 nvme3n1: ios=0/4067, merge=0/0, ticks=0/1219734, in_queue=1219734, util=97.81% 00:26:50.849 nvme4n1: ios=49/5100, merge=0/0, ticks=2726/1236793, in_queue=1239519, util=99.88% 00:26:50.849 nvme5n1: ios=5/5541, merge=0/0, ticks=210/1243963, in_queue=1244173, util=98.63% 00:26:50.849 nvme6n1: ios=47/4539, merge=0/0, ticks=2784/1217126, in_queue=1219910, util=99.87% 00:26:50.849 nvme7n1: ios=0/3932, merge=0/0, ticks=0/1220033, in_queue=1220033, util=98.86% 00:26:50.849 nvme8n1: ios=39/13697, merge=0/0, ticks=2137/1153302, in_queue=1155439, util=99.87% 00:26:50.849 nvme9n1: ios=21/6569, merge=0/0, ticks=63/1248483, in_queue=1248546, util=99.46% 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:50.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:50.849 07:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:50.849 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:50.849 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # 
grep -q -w SPDK2 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.850 07:50:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:51.416 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.416 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:51.674 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.674 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:51.933 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:51.933 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:51.933 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:51.933 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:51.933 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:52.192 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.192 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:52.192 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.192 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:52.192 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.192 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.192 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:26:52.192 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.192 07:50:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:52.192 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:52.192 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:52.192 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.192 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.192 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:52.450 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.450 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:52.450 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.450 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:52.450 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.450 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.450 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.450 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.450 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:52.709 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.709 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:52.968 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.968 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:53.226 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.226 07:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.226 07:50:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:53.484 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # 
grep -q -w SPDK10 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.484 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.485 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.485 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.485 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:53.743 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.743 07:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.743 rmmod nvme_tcp 00:26:53.743 rmmod nvme_fabrics 00:26:53.743 rmmod nvme_keyring 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3021744 ']' 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3021744 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3021744 ']' 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3021744 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3021744 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3021744' 00:26:53.743 killing process with pid 3021744 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3021744 00:26:53.743 07:50:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3021744 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.028 07:50:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:58.993 00:26:58.993 real 1m5.791s 00:26:58.993 user 3m49.036s 00:26:58.993 sys 0m17.625s 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:58.993 ************************************ 00:26:58.993 END TEST nvmf_multiconnection 00:26:58.993 ************************************ 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:58.993 ************************************ 00:26:58.993 START TEST nvmf_initiator_timeout 00:26:58.993 ************************************ 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:58.993 * Looking for test storage... 00:26:58.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra 
ver1 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.993 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:58.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.993 --rc genhtml_branch_coverage=1 00:26:58.993 --rc genhtml_function_coverage=1 
00:26:58.994 --rc genhtml_legend=1 00:26:58.994 --rc geninfo_all_blocks=1 00:26:58.994 --rc geninfo_unexecuted_blocks=1 00:26:58.994 00:26:58.994 ' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:58.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.994 --rc genhtml_branch_coverage=1 00:26:58.994 --rc genhtml_function_coverage=1 00:26:58.994 --rc genhtml_legend=1 00:26:58.994 --rc geninfo_all_blocks=1 00:26:58.994 --rc geninfo_unexecuted_blocks=1 00:26:58.994 00:26:58.994 ' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:58.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.994 --rc genhtml_branch_coverage=1 00:26:58.994 --rc genhtml_function_coverage=1 00:26:58.994 --rc genhtml_legend=1 00:26:58.994 --rc geninfo_all_blocks=1 00:26:58.994 --rc geninfo_unexecuted_blocks=1 00:26:58.994 00:26:58.994 ' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:58.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.994 --rc genhtml_branch_coverage=1 00:26:58.994 --rc genhtml_function_coverage=1 00:26:58.994 --rc genhtml_legend=1 00:26:58.994 --rc geninfo_all_blocks=1 00:26:58.994 --rc geninfo_unexecuted_blocks=1 00:26:58.994 00:26:58.994 ' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.994 07:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.994 07:50:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.899 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@321 -- # x722=() 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.900 07:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:00.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 
'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:00.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.900 07:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:00.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:00.900 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp 
== tcp ]] 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.900 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.900 07:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:01.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:27:01.159 00:27:01.159 --- 10.0.0.2 ping statistics --- 00:27:01.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.159 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:27:01.159 00:27:01.159 --- 10.0.0.1 ping statistics --- 00:27:01.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.159 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3030623 
00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3030623 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3030623 ']' 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.159 07:50:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.159 [2024-11-19 07:50:53.059770] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:01.159 [2024-11-19 07:50:53.059913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.417 [2024-11-19 07:50:53.201586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.417 [2024-11-19 07:50:53.338115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:01.417 [2024-11-19 07:50:53.338180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.417 [2024-11-19 07:50:53.338214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.417 [2024-11-19 07:50:53.338237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.417 [2024-11-19 07:50:53.338263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.417 [2024-11-19 07:50:53.341142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.417 [2024-11-19 07:50:53.341212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.417 [2024-11-19 07:50:53.341300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.417 [2024-11-19 07:50:53.341302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:02.351 
07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.351 Malloc0 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.351 Delay0 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.351 [2024-11-19 07:50:54.137240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.351 [2024-11-19 07:50:54.166926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.351 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:02.917 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:02.917 
07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:02.918 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:02.918 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:02.918 07:50:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:05.448 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:05.448 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:05.448 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:05.448 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:05.448 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:05.448 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:05.448 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3031072 00:27:05.448 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:05.448 07:50:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:05.448 [global] 00:27:05.448 thread=1 00:27:05.448 invalidate=1 00:27:05.448 rw=write 00:27:05.448 time_based=1 00:27:05.448 runtime=60 00:27:05.448 ioengine=libaio 00:27:05.448 direct=1 00:27:05.448 bs=4096 00:27:05.448 
iodepth=1 00:27:05.448 norandommap=0 00:27:05.448 numjobs=1 00:27:05.448 00:27:05.448 verify_dump=1 00:27:05.448 verify_backlog=512 00:27:05.448 verify_state_save=0 00:27:05.448 do_verify=1 00:27:05.448 verify=crc32c-intel 00:27:05.448 [job0] 00:27:05.448 filename=/dev/nvme0n1 00:27:05.448 Could not set queue depth (nvme0n1) 00:27:05.448 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:05.448 fio-3.35 00:27:05.448 Starting 1 thread 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.982 true 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.982 true 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.982 true 00:27:07.982 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.983 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:07.983 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.983 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.983 true 00:27:07.983 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.983 07:50:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.268 true 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.268 true 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.268 07:51:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.268 true 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.268 true 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:11.268 07:51:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3031072 00:28:07.510 00:28:07.510 job0: (groupid=0, jobs=1): err= 0: pid=3031248: Tue Nov 19 07:51:57 2024 00:28:07.510 read: IOPS=225, BW=901KiB/s (923kB/s)(52.8MiB/60011msec) 00:28:07.510 slat (usec): min=5, max=13535, avg=13.24, stdev=141.02 00:28:07.510 clat (usec): min=274, max=40966k, avg=4108.80, stdev=352407.01 00:28:07.510 lat (usec): min=281, max=40966k, avg=4122.04, stdev=352407.24 00:28:07.510 clat percentiles (usec): 00:28:07.510 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 306], 00:28:07.510 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 330], 00:28:07.510 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 375], 95.00th=[ 490], 00:28:07.510 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:28:07.510 | 99.99th=[41157] 00:28:07.510 write: IOPS=230, BW=921KiB/s (944kB/s)(54.0MiB/60011msec); 0 zone resets 00:28:07.510 slat (usec): min=6, max=28009, avg=19.66, stdev=238.40 00:28:07.510 clat (usec): min=201, max=637, avg=283.33, stdev=69.18 00:28:07.510 lat (usec): min=209, max=28441, avg=302.99, stdev=251.91 00:28:07.510 clat percentiles (usec): 00:28:07.510 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:28:07.510 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 269], 00:28:07.510 | 70.00th=[ 310], 80.00th=[ 355], 90.00th=[ 396], 95.00th=[ 429], 00:28:07.510 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 502], 99.95th=[ 537], 00:28:07.510 | 99.99th=[ 635] 00:28:07.510 bw ( KiB/s): min= 3888, max= 7080, per=100.00%, avg=5529.60, stdev=1086.17, samples=20 00:28:07.510 iops : min= 972, max= 1770, avg=1382.40, stdev=271.54, samples=20 00:28:07.510 lat (usec) : 250=25.63%, 500=72.27%, 750=1.19% 00:28:07.510 lat (msec) : 2=0.01%, 4=0.01%, 50=0.90%, >=2000=0.01% 00:28:07.510 cpu : usr=0.42%, sys=0.94%, ctx=27346, majf=0, minf=1 00:28:07.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.510 issued rwts: total=13516,13824,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:07.510 00:28:07.510 Run status group 0 (all jobs): 00:28:07.510 READ: bw=901KiB/s (923kB/s), 901KiB/s-901KiB/s (923kB/s-923kB/s), io=52.8MiB (55.4MB), run=60011-60011msec 00:28:07.510 WRITE: bw=921KiB/s (944kB/s), 921KiB/s-921KiB/s (944kB/s-944kB/s), io=54.0MiB (56.6MB), run=60011-60011msec 00:28:07.510 00:28:07.510 Disk stats (read/write): 00:28:07.510 nvme0n1: ios=13566/13824, merge=0/0, ticks=15470/3722, in_queue=19192, util=99.93% 
00:28:07.510 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:07.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:07.510 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:07.510 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:07.510 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:07.510 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:07.510 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:07.510 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:07.510 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:07.511 nvmf hotplug test: fio successful as expected 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:07.511 07:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:07.511 rmmod nvme_tcp 00:28:07.511 rmmod nvme_fabrics 00:28:07.511 rmmod nvme_keyring 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3030623 ']' 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3030623 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3030623 ']' 00:28:07.511 
07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3030623 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030623 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3030623' 00:28:07.511 killing process with pid 3030623 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3030623 00:28:07.511 07:51:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 3030623 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.511 07:51:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.892 07:52:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:08.892 00:28:08.892 real 1m10.181s 00:28:08.892 user 4m16.048s 00:28:08.892 sys 0m7.765s 00:28:08.892 07:52:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:08.892 07:52:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:08.892 ************************************ 00:28:08.892 END TEST nvmf_initiator_timeout 00:28:08.892 ************************************ 00:28:08.892 07:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:08.892 07:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:08.892 07:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:08.892 07:52:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:08.892 07:52:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:11.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.427 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:11.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:11.428 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:11.428 07:52:02 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:11.428 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:11.428 ************************************ 00:28:11.428 START 
TEST nvmf_perf_adq 00:28:11.428 ************************************ 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:11.428 * Looking for test storage... 00:28:11.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.428 07:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:11.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.428 --rc genhtml_branch_coverage=1 00:28:11.428 --rc genhtml_function_coverage=1 00:28:11.428 --rc genhtml_legend=1 00:28:11.428 --rc geninfo_all_blocks=1 00:28:11.428 --rc geninfo_unexecuted_blocks=1 00:28:11.428 00:28:11.428 ' 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:11.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.428 --rc genhtml_branch_coverage=1 00:28:11.428 --rc genhtml_function_coverage=1 00:28:11.428 --rc genhtml_legend=1 00:28:11.428 --rc geninfo_all_blocks=1 00:28:11.428 --rc geninfo_unexecuted_blocks=1 00:28:11.428 00:28:11.428 ' 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:11.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.428 --rc genhtml_branch_coverage=1 00:28:11.428 --rc genhtml_function_coverage=1 00:28:11.428 --rc genhtml_legend=1 00:28:11.428 --rc geninfo_all_blocks=1 00:28:11.428 --rc geninfo_unexecuted_blocks=1 00:28:11.428 00:28:11.428 ' 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:11.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.428 --rc genhtml_branch_coverage=1 00:28:11.428 --rc genhtml_function_coverage=1 00:28:11.428 --rc genhtml_legend=1 00:28:11.428 --rc geninfo_all_blocks=1 00:28:11.428 --rc geninfo_unexecuted_blocks=1 00:28:11.428 00:28:11.428 ' 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.428 
07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.428 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:11.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:11.429 07:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.429 07:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:13.336 07:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:13.336 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:13.336 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:13.336 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:13.336 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:13.336 07:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:13.904 07:52:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:16.446 07:52:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:21.723 07:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.723 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.724 07:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:21.724 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:21.724 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.724 07:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:21.724 Found net devices under 0000:0a:00.0: cvl_0_0 
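The "Found net devices under …" messages above come from the discovery loop the trace shows (nvmf/common.sh@410-429): for each NIC PCI address, glob its `net/` directory in sysfs and keep the interface names. A minimal sketch of that loop, run against a throwaway directory tree instead of the real `/sys` so it works without hardware; the PCI addresses and `cvl_*` names are copied from the log:

```shell
# Sketch of the pci -> net-device discovery loop traced above, using a fake
# sysfs tree (the real script globs /sys/bus/pci/devices/$pci/net/*).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/devices/0000:0a:00.0/net/cvl_0_0" \
         "$sysfs/devices/0000:0a:00.1/net/cvl_0_1"
net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("$sysfs/devices/$pci/net/"*)   # one glob hit per interface
    pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "net_devs: ${net_devs[*]}"
rm -rf "$sysfs"
```

The array-stripping step (`${pci_net_devs[@]##*/}`) is exactly what common.sh@427 does before echoing the message seen in the log.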
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:21.724 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:21.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:21.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms
00:28:21.724
00:28:21.724 --- 10.0.0.2 ping statistics ---
00:28:21.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:21.724 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:21.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:21.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms
00:28:21.724
00:28:21.724 --- 10.0.0.1 ping statistics ---
00:28:21.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:21.724 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:21.724 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3043511
00:28:21.725 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:21.725 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3043511 00:28:21.725 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3043511 ']' 00:28:21.725 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.725 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.725 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.725 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.725 07:52:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.725 [2024-11-19 07:52:13.241173] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:21.725 [2024-11-19 07:52:13.241330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.725 [2024-11-19 07:52:13.395911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.725 [2024-11-19 07:52:13.540191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.725 [2024-11-19 07:52:13.540286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:21.725 [2024-11-19 07:52:13.540312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.725 [2024-11-19 07:52:13.540337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.725 [2024-11-19 07:52:13.540358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.725 [2024-11-19 07:52:13.543154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.725 [2024-11-19 07:52:13.543214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.725 [2024-11-19 07:52:13.543287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.725 [2024-11-19 07:52:13.543293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.687 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.687 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:22.687 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.687 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.687 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.688 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.946 [2024-11-19 07:52:14.666750] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.946 
07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.946 Malloc1 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.946 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:22.947 [2024-11-19 07:52:14.788647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 ***
00:28:22.947 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:22.947 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3043677
00:28:22.947 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:28:22.947 07:52:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:28:25.475 "tick_rate": 2700000000,
00:28:25.475 "poll_groups": [
00:28:25.475 {
00:28:25.475 "name": "nvmf_tgt_poll_group_000",
00:28:25.475 "admin_qpairs": 1,
00:28:25.475 "io_qpairs": 1,
00:28:25.475 "current_admin_qpairs": 1,
00:28:25.475 "current_io_qpairs": 1,
00:28:25.475 "pending_bdev_io": 0,
00:28:25.475 "completed_nvme_io": 16191,
00:28:25.475 "transports": [
00:28:25.475 {
00:28:25.475 "trtype": "TCP"
00:28:25.475 }
00:28:25.475 ]
00:28:25.475 },
00:28:25.475 {
00:28:25.475 "name": "nvmf_tgt_poll_group_001",
00:28:25.475 "admin_qpairs": 0,
00:28:25.475 "io_qpairs": 1,
00:28:25.475 "current_admin_qpairs": 0,
00:28:25.475 "current_io_qpairs": 1,
00:28:25.475 "pending_bdev_io": 0,
00:28:25.475 "completed_nvme_io": 15526,
00:28:25.475 "transports": [
00:28:25.475 {
00:28:25.475 "trtype": "TCP"
00:28:25.475 }
00:28:25.475 ]
00:28:25.475 },
00:28:25.475 {
00:28:25.475 "name": "nvmf_tgt_poll_group_002",
00:28:25.475 "admin_qpairs": 0,
00:28:25.475 "io_qpairs": 1,
00:28:25.475 "current_admin_qpairs": 0,
00:28:25.475 "current_io_qpairs": 1,
00:28:25.475 "pending_bdev_io": 0,
00:28:25.475 "completed_nvme_io": 16765,
00:28:25.475 "transports": [
00:28:25.475 {
00:28:25.475 "trtype": "TCP"
00:28:25.475 }
00:28:25.475 ]
00:28:25.475 },
00:28:25.475 {
00:28:25.475 "name": "nvmf_tgt_poll_group_003",
00:28:25.475 "admin_qpairs": 0,
00:28:25.475 "io_qpairs": 1,
00:28:25.475 "current_admin_qpairs": 0,
00:28:25.475 "current_io_qpairs": 1,
00:28:25.475 "pending_bdev_io": 0,
00:28:25.475 "completed_nvme_io": 16606,
00:28:25.475 "transports": [
00:28:25.475 {
00:28:25.475 "trtype": "TCP"
00:28:25.475 }
00:28:25.475 ]
00:28:25.475 }
00:28:25.475 ]
00:28:25.475 }'
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:28:25.475 07:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3043677
00:28:33.595 Initializing NVMe Controllers
00:28:33.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:33.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:28:33.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:28:33.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:28:33.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:28:33.595 Initialization complete. Launching workers.
00:28:33.595 ========================================================
00:28:33.595 Latency(us)
00:28:33.595 Device Information : IOPS MiB/s Average min max
00:28:33.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8851.70 34.58 7232.30 3456.97 10821.75
00:28:33.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8469.00 33.08 7556.67 2639.68 12483.41
00:28:33.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9064.10 35.41 7068.93 2882.98 44388.39
00:28:33.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8908.90 34.80 7183.95 2878.54 11159.50
00:28:33.595 ========================================================
00:28:33.595 Total : 35293.70 137.87 7255.98 2639.68 44388.39
00:28:33.595
00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:33.595 rmmod nvme_tcp
00:28:33.595 rmmod nvme_fabrics
00:28:33.595 rmmod nvme_keyring
00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:28:33.595 07:52:25
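The per-core rows of the perf summary above can be cross-checked against its Total line: total IOPS is a plain sum, and the average latency is the IOPS-weighted mean of the per-core averages. A small awk sketch with the values copied from the table; it reproduces the logged 35293.70 IOPS exactly and lands within a hundredth of the logged 7255.98 us average, since the table rows are themselves rounded:

```shell
# Recompute the perf summary's Total row from the four per-core rows above.
# IOPS values and per-core average latencies (us) are copied from the log.
summary=$(awk 'BEGIN {
    n = split("8851.70 8469.00 9064.10 8908.90", iops, " ")
    split("7232.30 7556.67 7068.93 7183.95", avg, " ")
    for (i = 1; i <= n; i++) { total += iops[i]; wsum += iops[i] * avg[i] }
    printf "Total IOPS %.2f, weighted avg latency %.2f us", total, wsum / total
}')
echo "$summary"
```

The same weighting logic explains why the Total row's min and max columns simply repeat the best per-core min (2639.68) and worst per-core max (44388.39).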
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3043511 ']' 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3043511 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3043511 ']' 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3043511 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043511 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043511' 00:28:33.595 killing process with pid 3043511 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3043511 00:28:33.595 07:52:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3043511 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:34.534 
07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.534 07:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.073 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.073 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:37.073 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:37.073 07:52:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:37.638 07:52:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:40.173 07:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.456 07:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.456 07:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:28:45.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:45.456 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:28:45.457 00:28:45.457 --- 10.0.0.2 ping statistics --- 00:28:45.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.457 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:45.457 00:28:45.457 --- 10.0.0.1 ping statistics --- 00:28:45.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.457 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:45.457 net.core.busy_poll = 1 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:45.457 net.core.busy_read = 1 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:45.457 07:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3046545 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
3046545 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3046545 ']' 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.457 07:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:45.457 [2024-11-19 07:52:37.095826] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:45.457 [2024-11-19 07:52:37.095956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.457 [2024-11-19 07:52:37.251862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.713 [2024-11-19 07:52:37.393485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.713 [2024-11-19 07:52:37.393575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.713 [2024-11-19 07:52:37.393602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.713 [2024-11-19 07:52:37.393636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:45.713 [2024-11-19 07:52:37.393657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.713 [2024-11-19 07:52:37.396442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.713 [2024-11-19 07:52:37.396511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.713 [2024-11-19 07:52:37.396607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.713 [2024-11-19 07:52:37.396612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.280 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.850 [2024-11-19 07:52:38.497798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.850 07:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.850 Malloc1 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.850 [2024-11-19 07:52:38.619598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3046705 
00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:46.850 07:52:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:48.751 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:48.751 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.751 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.751 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.751 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:48.751 "tick_rate": 2700000000, 00:28:48.751 "poll_groups": [ 00:28:48.751 { 00:28:48.751 "name": "nvmf_tgt_poll_group_000", 00:28:48.751 "admin_qpairs": 1, 00:28:48.751 "io_qpairs": 0, 00:28:48.751 "current_admin_qpairs": 1, 00:28:48.751 "current_io_qpairs": 0, 00:28:48.751 "pending_bdev_io": 0, 00:28:48.751 "completed_nvme_io": 0, 00:28:48.751 "transports": [ 00:28:48.751 { 00:28:48.751 "trtype": "TCP" 00:28:48.751 } 00:28:48.751 ] 00:28:48.751 }, 00:28:48.751 { 00:28:48.751 "name": "nvmf_tgt_poll_group_001", 00:28:48.751 "admin_qpairs": 0, 00:28:48.751 "io_qpairs": 4, 00:28:48.751 "current_admin_qpairs": 0, 00:28:48.751 "current_io_qpairs": 4, 00:28:48.751 "pending_bdev_io": 0, 00:28:48.751 "completed_nvme_io": 22939, 00:28:48.751 "transports": [ 00:28:48.751 { 00:28:48.751 "trtype": "TCP" 00:28:48.751 } 00:28:48.751 ] 00:28:48.751 }, 00:28:48.751 { 00:28:48.751 "name": "nvmf_tgt_poll_group_002", 00:28:48.751 "admin_qpairs": 0, 00:28:48.751 "io_qpairs": 0, 00:28:48.751 "current_admin_qpairs": 0, 00:28:48.751 
"current_io_qpairs": 0, 00:28:48.751 "pending_bdev_io": 0, 00:28:48.751 "completed_nvme_io": 0, 00:28:48.751 "transports": [ 00:28:48.751 { 00:28:48.751 "trtype": "TCP" 00:28:48.751 } 00:28:48.751 ] 00:28:48.751 }, 00:28:48.751 { 00:28:48.751 "name": "nvmf_tgt_poll_group_003", 00:28:48.751 "admin_qpairs": 0, 00:28:48.751 "io_qpairs": 0, 00:28:48.751 "current_admin_qpairs": 0, 00:28:48.751 "current_io_qpairs": 0, 00:28:48.751 "pending_bdev_io": 0, 00:28:48.751 "completed_nvme_io": 0, 00:28:48.751 "transports": [ 00:28:48.751 { 00:28:48.751 "trtype": "TCP" 00:28:48.751 } 00:28:48.751 ] 00:28:48.751 } 00:28:48.751 ] 00:28:48.751 }' 00:28:48.751 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:48.751 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:48.751 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:28:48.751 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:28:49.010 07:52:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3046705 00:28:57.220 Initializing NVMe Controllers 00:28:57.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:57.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:57.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:57.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:57.220 Initialization complete. Launching workers. 
00:28:57.220 ======================================================== 00:28:57.220 Latency(us) 00:28:57.220 Device Information : IOPS MiB/s Average min max 00:28:57.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 2750.10 10.74 23274.44 4321.96 70993.97 00:28:57.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3168.90 12.38 20198.32 2564.90 70959.61 00:28:57.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3424.10 13.38 18694.01 2610.02 70832.97 00:28:57.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3189.30 12.46 20074.63 3079.91 70431.37 00:28:57.220 ======================================================== 00:28:57.220 Total : 12532.40 48.95 20430.86 2564.90 70993.97 00:28:57.220 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.220 rmmod nvme_tcp 00:28:57.220 rmmod nvme_fabrics 00:28:57.220 rmmod nvme_keyring 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:57.220 07:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3046545 ']' 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3046545 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3046545 ']' 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3046545 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046545 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046545' 00:28:57.220 killing process with pid 3046545 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3046545 00:28:57.220 07:52:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3046545 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:58.598 
07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.598 07:52:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.503 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.503 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:00.503 00:29:00.503 real 0m49.440s 00:29:00.503 user 2m53.370s 00:29:00.503 sys 0m10.574s 00:29:00.503 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.503 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.503 ************************************ 00:29:00.503 END TEST nvmf_perf_adq 00:29:00.503 ************************************ 00:29:00.504 07:52:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:00.504 07:52:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:00.504 07:52:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.504 07:52:52 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.504 ************************************ 00:29:00.504 START TEST nvmf_shutdown 00:29:00.504 ************************************ 00:29:00.504 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:00.504 * Looking for test storage... 00:29:00.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:00.504 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:00.504 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:00.504 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.763 07:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:00.763 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:00.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.764 --rc genhtml_branch_coverage=1 00:29:00.764 --rc genhtml_function_coverage=1 00:29:00.764 --rc genhtml_legend=1 00:29:00.764 --rc geninfo_all_blocks=1 00:29:00.764 --rc geninfo_unexecuted_blocks=1 00:29:00.764 00:29:00.764 ' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:00.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.764 --rc genhtml_branch_coverage=1 00:29:00.764 --rc genhtml_function_coverage=1 00:29:00.764 --rc genhtml_legend=1 00:29:00.764 --rc geninfo_all_blocks=1 00:29:00.764 --rc geninfo_unexecuted_blocks=1 00:29:00.764 00:29:00.764 ' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:00.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.764 --rc genhtml_branch_coverage=1 00:29:00.764 --rc genhtml_function_coverage=1 00:29:00.764 --rc genhtml_legend=1 00:29:00.764 --rc geninfo_all_blocks=1 00:29:00.764 --rc geninfo_unexecuted_blocks=1 00:29:00.764 00:29:00.764 ' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:00.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.764 --rc genhtml_branch_coverage=1 00:29:00.764 --rc genhtml_function_coverage=1 00:29:00.764 --rc genhtml_legend=1 00:29:00.764 --rc geninfo_all_blocks=1 00:29:00.764 --rc geninfo_unexecuted_blocks=1 00:29:00.764 00:29:00.764 ' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:00.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:00.764 ************************************ 00:29:00.764 START TEST nvmf_shutdown_tc1 00:29:00.764 ************************************ 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:00.764 07:52:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:02.672 07:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.672 07:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:02.672 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.672 07:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:02.672 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.672 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:02.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:02.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:02.673 07:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.673 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.931 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.931 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.931 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.931 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:02.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:02.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:29:02.931 00:29:02.931 --- 10.0.0.2 ping statistics --- 00:29:02.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.931 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:29:02.931 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:02.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:29:02.931 00:29:02.932 --- 10.0.0.1 ping statistics --- 00:29:02.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.932 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3049999 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3049999 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3049999 ']' 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:02.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.932 07:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:02.932 [2024-11-19 07:52:54.744836] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:02.932 [2024-11-19 07:52:54.744981] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:03.191 [2024-11-19 07:52:54.899098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:03.191 [2024-11-19 07:52:55.045118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:03.191 [2024-11-19 07:52:55.045196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:03.191 [2024-11-19 07:52:55.045222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:03.191 [2024-11-19 07:52:55.045247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:03.191 [2024-11-19 07:52:55.045268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:03.191 [2024-11-19 07:52:55.048460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.191 [2024-11-19 07:52:55.048506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.191 [2024-11-19 07:52:55.048556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.191 [2024-11-19 07:52:55.048559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.125 [2024-11-19 07:52:55.728332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.125 07:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.125 07:52:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.125 Malloc1 00:29:04.125 [2024-11-19 07:52:55.862897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.125 Malloc2 00:29:04.125 Malloc3 00:29:04.383 Malloc4 00:29:04.383 Malloc5 00:29:04.641 Malloc6 00:29:04.641 Malloc7 00:29:04.641 Malloc8 00:29:04.900 Malloc9 
00:29:04.900 Malloc10 00:29:04.900 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.900 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:04.900 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.900 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.900 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3050310 00:29:04.900 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3050310 /var/tmp/bdevperf.sock 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3050310 ']' 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:04.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.901 { 00:29:04.901 "params": { 00:29:04.901 "name": "Nvme$subsystem", 00:29:04.901 "trtype": "$TEST_TRANSPORT", 00:29:04.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.901 "adrfam": "ipv4", 00:29:04.901 "trsvcid": "$NVMF_PORT", 00:29:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.901 "hdgst": ${hdgst:-false}, 00:29:04.901 "ddgst": ${ddgst:-false} 00:29:04.901 }, 00:29:04.901 "method": "bdev_nvme_attach_controller" 00:29:04.901 } 00:29:04.901 EOF 00:29:04.901 )") 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.901 { 00:29:04.901 "params": { 00:29:04.901 "name": "Nvme$subsystem", 00:29:04.901 "trtype": "$TEST_TRANSPORT", 00:29:04.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.901 "adrfam": "ipv4", 00:29:04.901 "trsvcid": "$NVMF_PORT", 00:29:04.901 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.901 "hdgst": ${hdgst:-false}, 00:29:04.901 "ddgst": ${ddgst:-false} 00:29:04.901 }, 00:29:04.901 "method": "bdev_nvme_attach_controller" 00:29:04.901 } 00:29:04.901 EOF 00:29:04.901 )") 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.901 { 00:29:04.901 "params": { 00:29:04.901 "name": "Nvme$subsystem", 00:29:04.901 "trtype": "$TEST_TRANSPORT", 00:29:04.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.901 "adrfam": "ipv4", 00:29:04.901 "trsvcid": "$NVMF_PORT", 00:29:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.901 "hdgst": ${hdgst:-false}, 00:29:04.901 "ddgst": ${ddgst:-false} 00:29:04.901 }, 00:29:04.901 "method": "bdev_nvme_attach_controller" 00:29:04.901 } 00:29:04.901 EOF 00:29:04.901 )") 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.901 { 00:29:04.901 "params": { 00:29:04.901 "name": "Nvme$subsystem", 00:29:04.901 "trtype": "$TEST_TRANSPORT", 00:29:04.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.901 "adrfam": "ipv4", 00:29:04.901 "trsvcid": "$NVMF_PORT", 00:29:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.901 "hdgst": 
${hdgst:-false}, 00:29:04.901 "ddgst": ${ddgst:-false} 00:29:04.901 }, 00:29:04.901 "method": "bdev_nvme_attach_controller" 00:29:04.901 } 00:29:04.901 EOF 00:29:04.901 )") 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.901 { 00:29:04.901 "params": { 00:29:04.901 "name": "Nvme$subsystem", 00:29:04.901 "trtype": "$TEST_TRANSPORT", 00:29:04.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.901 "adrfam": "ipv4", 00:29:04.901 "trsvcid": "$NVMF_PORT", 00:29:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.901 "hdgst": ${hdgst:-false}, 00:29:04.901 "ddgst": ${ddgst:-false} 00:29:04.901 }, 00:29:04.901 "method": "bdev_nvme_attach_controller" 00:29:04.901 } 00:29:04.901 EOF 00:29:04.901 )") 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.901 { 00:29:04.901 "params": { 00:29:04.901 "name": "Nvme$subsystem", 00:29:04.901 "trtype": "$TEST_TRANSPORT", 00:29:04.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.901 "adrfam": "ipv4", 00:29:04.901 "trsvcid": "$NVMF_PORT", 00:29:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.901 "hdgst": ${hdgst:-false}, 00:29:04.901 "ddgst": ${ddgst:-false} 00:29:04.901 }, 00:29:04.901 "method": "bdev_nvme_attach_controller" 
00:29:04.901 } 00:29:04.901 EOF 00:29:04.901 )") 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.901 { 00:29:04.901 "params": { 00:29:04.901 "name": "Nvme$subsystem", 00:29:04.901 "trtype": "$TEST_TRANSPORT", 00:29:04.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.901 "adrfam": "ipv4", 00:29:04.901 "trsvcid": "$NVMF_PORT", 00:29:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.901 "hdgst": ${hdgst:-false}, 00:29:04.901 "ddgst": ${ddgst:-false} 00:29:04.901 }, 00:29:04.901 "method": "bdev_nvme_attach_controller" 00:29:04.901 } 00:29:04.901 EOF 00:29:04.901 )") 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.901 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.901 { 00:29:04.901 "params": { 00:29:04.901 "name": "Nvme$subsystem", 00:29:04.901 "trtype": "$TEST_TRANSPORT", 00:29:04.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.901 "adrfam": "ipv4", 00:29:04.901 "trsvcid": "$NVMF_PORT", 00:29:04.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.901 "hdgst": ${hdgst:-false}, 00:29:04.902 "ddgst": ${ddgst:-false} 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 } 00:29:04.902 EOF 00:29:04.902 )") 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.902 { 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme$subsystem", 00:29:04.902 "trtype": "$TEST_TRANSPORT", 00:29:04.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "$NVMF_PORT", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.902 "hdgst": ${hdgst:-false}, 00:29:04.902 "ddgst": ${ddgst:-false} 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 } 00:29:04.902 EOF 00:29:04.902 )") 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.902 { 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme$subsystem", 00:29:04.902 "trtype": "$TEST_TRANSPORT", 00:29:04.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "$NVMF_PORT", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.902 "hdgst": ${hdgst:-false}, 00:29:04.902 "ddgst": ${ddgst:-false} 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 } 00:29:04.902 EOF 00:29:04.902 )") 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:04.902 07:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme1", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 },{ 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme2", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 },{ 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme3", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 },{ 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme4", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 },{ 
00:29:04.902 "params": { 00:29:04.902 "name": "Nvme5", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 },{ 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme6", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 },{ 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme7", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 },{ 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme8", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 },{ 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme9", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:04.902 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 },{ 00:29:04.902 "params": { 00:29:04.902 "name": "Nvme10", 00:29:04.902 "trtype": "tcp", 00:29:04.902 "traddr": "10.0.0.2", 00:29:04.902 "adrfam": "ipv4", 00:29:04.902 "trsvcid": "4420", 00:29:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:04.902 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:04.902 "hdgst": false, 00:29:04.902 "ddgst": false 00:29:04.902 }, 00:29:04.902 "method": "bdev_nvme_attach_controller" 00:29:04.902 }' 00:29:05.161 [2024-11-19 07:52:56.880590] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:05.161 [2024-11-19 07:52:56.880762] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:05.161 [2024-11-19 07:52:57.019681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.419 [2024-11-19 07:52:57.150126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.948 07:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.948 07:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:07.948 07:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:07.948 07:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.948 07:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:07.948 07:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.948 07:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3050310 00:29:07.948 07:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:07.948 07:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:08.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3050310 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3049999 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.883 { 00:29:08.883 "params": { 00:29:08.883 "name": "Nvme$subsystem", 00:29:08.883 "trtype": "$TEST_TRANSPORT", 00:29:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.883 "adrfam": "ipv4", 00:29:08.883 "trsvcid": "$NVMF_PORT", 00:29:08.883 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.883 "hdgst": ${hdgst:-false}, 00:29:08.883 "ddgst": ${ddgst:-false} 00:29:08.883 }, 00:29:08.883 "method": "bdev_nvme_attach_controller" 00:29:08.883 } 00:29:08.883 EOF 00:29:08.883 )") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.883 { 00:29:08.883 "params": { 00:29:08.883 "name": "Nvme$subsystem", 00:29:08.883 "trtype": "$TEST_TRANSPORT", 00:29:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.883 "adrfam": "ipv4", 00:29:08.883 "trsvcid": "$NVMF_PORT", 00:29:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.883 "hdgst": ${hdgst:-false}, 00:29:08.883 "ddgst": ${ddgst:-false} 00:29:08.883 }, 00:29:08.883 "method": "bdev_nvme_attach_controller" 00:29:08.883 } 00:29:08.883 EOF 00:29:08.883 )") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.883 { 00:29:08.883 "params": { 00:29:08.883 "name": "Nvme$subsystem", 00:29:08.883 "trtype": "$TEST_TRANSPORT", 00:29:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.883 "adrfam": "ipv4", 00:29:08.883 "trsvcid": "$NVMF_PORT", 00:29:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.883 "hdgst": 
${hdgst:-false}, 00:29:08.883 "ddgst": ${ddgst:-false} 00:29:08.883 }, 00:29:08.883 "method": "bdev_nvme_attach_controller" 00:29:08.883 } 00:29:08.883 EOF 00:29:08.883 )") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.883 { 00:29:08.883 "params": { 00:29:08.883 "name": "Nvme$subsystem", 00:29:08.883 "trtype": "$TEST_TRANSPORT", 00:29:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.883 "adrfam": "ipv4", 00:29:08.883 "trsvcid": "$NVMF_PORT", 00:29:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.883 "hdgst": ${hdgst:-false}, 00:29:08.883 "ddgst": ${ddgst:-false} 00:29:08.883 }, 00:29:08.883 "method": "bdev_nvme_attach_controller" 00:29:08.883 } 00:29:08.883 EOF 00:29:08.883 )") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.883 { 00:29:08.883 "params": { 00:29:08.883 "name": "Nvme$subsystem", 00:29:08.883 "trtype": "$TEST_TRANSPORT", 00:29:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.883 "adrfam": "ipv4", 00:29:08.883 "trsvcid": "$NVMF_PORT", 00:29:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.883 "hdgst": ${hdgst:-false}, 00:29:08.883 "ddgst": ${ddgst:-false} 00:29:08.883 }, 00:29:08.883 "method": "bdev_nvme_attach_controller" 
00:29:08.883 } 00:29:08.883 EOF 00:29:08.883 )") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.883 { 00:29:08.883 "params": { 00:29:08.883 "name": "Nvme$subsystem", 00:29:08.883 "trtype": "$TEST_TRANSPORT", 00:29:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.883 "adrfam": "ipv4", 00:29:08.883 "trsvcid": "$NVMF_PORT", 00:29:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.883 "hdgst": ${hdgst:-false}, 00:29:08.883 "ddgst": ${ddgst:-false} 00:29:08.883 }, 00:29:08.883 "method": "bdev_nvme_attach_controller" 00:29:08.883 } 00:29:08.883 EOF 00:29:08.883 )") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.883 { 00:29:08.883 "params": { 00:29:08.883 "name": "Nvme$subsystem", 00:29:08.883 "trtype": "$TEST_TRANSPORT", 00:29:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.883 "adrfam": "ipv4", 00:29:08.883 "trsvcid": "$NVMF_PORT", 00:29:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.883 "hdgst": ${hdgst:-false}, 00:29:08.883 "ddgst": ${ddgst:-false} 00:29:08.883 }, 00:29:08.883 "method": "bdev_nvme_attach_controller" 00:29:08.883 } 00:29:08.883 EOF 00:29:08.883 )") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.883 { 00:29:08.883 "params": { 00:29:08.883 "name": "Nvme$subsystem", 00:29:08.883 "trtype": "$TEST_TRANSPORT", 00:29:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.883 "adrfam": "ipv4", 00:29:08.883 "trsvcid": "$NVMF_PORT", 00:29:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.883 "hdgst": ${hdgst:-false}, 00:29:08.883 "ddgst": ${ddgst:-false} 00:29:08.883 }, 00:29:08.883 "method": "bdev_nvme_attach_controller" 00:29:08.883 } 00:29:08.883 EOF 00:29:08.883 )") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.883 { 00:29:08.883 "params": { 00:29:08.883 "name": "Nvme$subsystem", 00:29:08.883 "trtype": "$TEST_TRANSPORT", 00:29:08.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.883 "adrfam": "ipv4", 00:29:08.883 "trsvcid": "$NVMF_PORT", 00:29:08.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.883 "hdgst": ${hdgst:-false}, 00:29:08.883 "ddgst": ${ddgst:-false} 00:29:08.883 }, 00:29:08.883 "method": "bdev_nvme_attach_controller" 00:29:08.883 } 00:29:08.883 EOF 00:29:08.883 )") 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.883 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.884 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.884 { 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme$subsystem", 00:29:08.884 "trtype": "$TEST_TRANSPORT", 00:29:08.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "$NVMF_PORT", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.884 "hdgst": ${hdgst:-false}, 00:29:08.884 "ddgst": ${ddgst:-false} 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 } 00:29:08.884 EOF 00:29:08.884 )") 00:29:08.884 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:08.884 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:08.884 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:08.884 07:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme1", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 },{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme2", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 
00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 },{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme3", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 },{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme4", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 },{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme5", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 },{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme6", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 },{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme7", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 },{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme8", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 },{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme9", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 },{ 00:29:08.884 "params": { 00:29:08.884 "name": "Nvme10", 00:29:08.884 "trtype": "tcp", 00:29:08.884 "traddr": "10.0.0.2", 00:29:08.884 "adrfam": "ipv4", 00:29:08.884 "trsvcid": "4420", 00:29:08.884 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:08.884 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:08.884 "hdgst": false, 00:29:08.884 "ddgst": false 00:29:08.884 }, 00:29:08.884 "method": "bdev_nvme_attach_controller" 00:29:08.884 }' 00:29:08.884 [2024-11-19 07:53:00.704705] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:29:08.884 [2024-11-19 07:53:00.704838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050740 ] 00:29:09.142 [2024-11-19 07:53:00.855299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.142 [2024-11-19 07:53:00.984841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.043 Running I/O for 1 seconds... 00:29:11.868 1485.00 IOPS, 92.81 MiB/s 00:29:11.868 Latency(us) 00:29:11.868 [2024-11-19T06:53:03.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.868 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 length 0x400 00:29:11.868 Nvme1n1 : 1.11 172.35 10.77 0.00 0.00 367220.56 39807.05 298261.62 00:29:11.868 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 length 0x400 00:29:11.868 Nvme2n1 : 1.05 183.66 11.48 0.00 0.00 337821.39 25437.68 315349.52 00:29:11.868 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 length 0x400 00:29:11.868 Nvme3n1 : 1.21 211.65 13.23 0.00 0.00 289142.71 21262.79 299815.06 00:29:11.868 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 length 0x400 00:29:11.868 Nvme4n1 : 1.20 216.54 13.53 0.00 0.00 277267.88 6165.24 293601.28 00:29:11.868 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 length 0x400 00:29:11.868 Nvme5n1 : 1.22 210.13 13.13 0.00 0.00 279881.39 22622.06 298261.62 00:29:11.868 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 
length 0x400 00:29:11.868 Nvme6n1 : 1.20 217.08 13.57 0.00 0.00 265226.25 12330.48 296708.17 00:29:11.868 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 length 0x400 00:29:11.868 Nvme7n1 : 1.22 209.22 13.08 0.00 0.00 273078.04 20971.52 299815.06 00:29:11.868 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 length 0x400 00:29:11.868 Nvme8n1 : 1.23 208.02 13.00 0.00 0.00 269944.79 22330.79 301368.51 00:29:11.868 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 length 0x400 00:29:11.868 Nvme9n1 : 1.18 162.22 10.14 0.00 0.00 337481.32 24563.86 338651.21 00:29:11.868 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:11.868 Verification LBA range: start 0x0 length 0x400 00:29:11.868 Nvme10n1 : 1.24 210.29 13.14 0.00 0.00 257394.60 2827.76 315349.52 00:29:11.868 [2024-11-19T06:53:03.798Z] =================================================================================================================== 00:29:11.868 [2024-11-19T06:53:03.798Z] Total : 2001.16 125.07 0.00 0.00 291089.79 2827.76 338651.21 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:12.803 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.803 rmmod nvme_tcp 00:29:13.062 rmmod nvme_fabrics 00:29:13.062 rmmod nvme_keyring 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3049999 ']' 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3049999 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3049999 ']' 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3049999 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3049999 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3049999' 00:29:13.062 killing process with pid 3049999 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3049999 00:29:13.062 07:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3049999 00:29:15.591 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.591 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.591 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.591 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:15.591 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:15.591 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.591 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.591 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:29:15.591 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.849 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.849 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.849 07:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.752 00:29:17.752 real 0m17.064s 00:29:17.752 user 0m55.239s 00:29:17.752 sys 0m3.935s 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:17.752 ************************************ 00:29:17.752 END TEST nvmf_shutdown_tc1 00:29:17.752 ************************************ 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:17.752 ************************************ 00:29:17.752 START TEST nvmf_shutdown_tc2 00:29:17.752 ************************************ 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:17.752 07:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.752 07:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.752 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:17.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:17.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:17.753 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.753 07:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:17.753 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.753 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:29:18.012 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.012 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.012 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.012 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.012 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.012 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.012 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.012 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:29:18.012 00:29:18.012 --- 10.0.0.2 ping statistics --- 00:29:18.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.012 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:29:18.012 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:29:18.012 00:29:18.012 --- 10.0.0.1 ping statistics --- 00:29:18.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.013 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.013 
07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3051911 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3051911 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3051911 ']' 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.013 07:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.013 [2024-11-19 07:53:09.939320] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:29:18.013 [2024-11-19 07:53:09.939458] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.270 [2024-11-19 07:53:10.100920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.528 [2024-11-19 07:53:10.245312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.528 [2024-11-19 07:53:10.245400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.528 [2024-11-19 07:53:10.245433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.528 [2024-11-19 07:53:10.245457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.528 [2024-11-19 07:53:10.245477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:18.528 [2024-11-19 07:53:10.248360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.528 [2024-11-19 07:53:10.248482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.528 [2024-11-19 07:53:10.248541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.528 [2024-11-19 07:53:10.248557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.095 [2024-11-19 07:53:10.917080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.095 07:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.095 07:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.353 Malloc1 00:29:19.353 [2024-11-19 07:53:11.065623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.353 Malloc2 00:29:19.353 Malloc3 00:29:19.611 Malloc4 00:29:19.611 Malloc5 00:29:19.611 Malloc6 00:29:19.870 Malloc7 00:29:19.870 Malloc8 00:29:20.128 Malloc9 
00:29:20.128 Malloc10 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3052209 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3052209 /var/tmp/bdevperf.sock 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3052209 ']' 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:20.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.128 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": ${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": ${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": ${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": 
${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": ${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": ${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 
00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": ${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": ${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": ${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.129 { 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme$subsystem", 00:29:20.129 "trtype": "$TEST_TRANSPORT", 00:29:20.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "$NVMF_PORT", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.129 "hdgst": ${hdgst:-false}, 00:29:20.129 "ddgst": ${ddgst:-false} 00:29:20.129 }, 00:29:20.129 "method": "bdev_nvme_attach_controller" 00:29:20.129 } 00:29:20.129 EOF 00:29:20.129 )") 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:20.129 07:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:20.129 "params": { 00:29:20.129 "name": "Nvme1", 00:29:20.129 "trtype": "tcp", 00:29:20.129 "traddr": "10.0.0.2", 00:29:20.129 "adrfam": "ipv4", 00:29:20.129 "trsvcid": "4420", 00:29:20.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 },{ 00:29:20.130 "params": { 00:29:20.130 "name": "Nvme2", 00:29:20.130 "trtype": "tcp", 00:29:20.130 "traddr": "10.0.0.2", 00:29:20.130 "adrfam": "ipv4", 00:29:20.130 "trsvcid": "4420", 00:29:20.130 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:20.130 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 },{ 00:29:20.130 "params": { 00:29:20.130 "name": "Nvme3", 00:29:20.130 "trtype": "tcp", 00:29:20.130 "traddr": "10.0.0.2", 00:29:20.130 "adrfam": "ipv4", 00:29:20.130 "trsvcid": "4420", 00:29:20.130 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:20.130 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 },{ 00:29:20.130 "params": { 00:29:20.130 "name": "Nvme4", 00:29:20.130 "trtype": "tcp", 00:29:20.130 "traddr": "10.0.0.2", 00:29:20.130 "adrfam": "ipv4", 00:29:20.130 "trsvcid": "4420", 00:29:20.130 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:20.130 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 },{ 
00:29:20.130 "params": { 00:29:20.130 "name": "Nvme5", 00:29:20.130 "trtype": "tcp", 00:29:20.130 "traddr": "10.0.0.2", 00:29:20.130 "adrfam": "ipv4", 00:29:20.130 "trsvcid": "4420", 00:29:20.130 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:20.130 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 },{ 00:29:20.130 "params": { 00:29:20.130 "name": "Nvme6", 00:29:20.130 "trtype": "tcp", 00:29:20.130 "traddr": "10.0.0.2", 00:29:20.130 "adrfam": "ipv4", 00:29:20.130 "trsvcid": "4420", 00:29:20.130 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:20.130 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 },{ 00:29:20.130 "params": { 00:29:20.130 "name": "Nvme7", 00:29:20.130 "trtype": "tcp", 00:29:20.130 "traddr": "10.0.0.2", 00:29:20.130 "adrfam": "ipv4", 00:29:20.130 "trsvcid": "4420", 00:29:20.130 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:20.130 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 },{ 00:29:20.130 "params": { 00:29:20.130 "name": "Nvme8", 00:29:20.130 "trtype": "tcp", 00:29:20.130 "traddr": "10.0.0.2", 00:29:20.130 "adrfam": "ipv4", 00:29:20.130 "trsvcid": "4420", 00:29:20.130 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:20.130 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 },{ 00:29:20.130 "params": { 00:29:20.130 "name": "Nvme9", 00:29:20.130 "trtype": "tcp", 00:29:20.130 "traddr": "10.0.0.2", 00:29:20.130 "adrfam": "ipv4", 00:29:20.130 "trsvcid": "4420", 00:29:20.130 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:20.130 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 },{ 00:29:20.130 "params": { 00:29:20.130 "name": "Nvme10", 00:29:20.130 "trtype": "tcp", 00:29:20.130 "traddr": "10.0.0.2", 00:29:20.130 "adrfam": "ipv4", 00:29:20.130 "trsvcid": "4420", 00:29:20.130 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:20.130 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:20.130 "hdgst": false, 00:29:20.130 "ddgst": false 00:29:20.130 }, 00:29:20.130 "method": "bdev_nvme_attach_controller" 00:29:20.130 }' 00:29:20.388 [2024-11-19 07:53:12.072238] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:20.388 [2024-11-19 07:53:12.072382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052209 ] 00:29:20.388 [2024-11-19 07:53:12.220931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.647 [2024-11-19 07:53:12.350044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.549 Running I/O for 10 seconds... 
00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:23.116 07:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:23.375 1481.00 IOPS, 92.56 MiB/s [2024-11-19T06:53:15.305Z] 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:23.375 07:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3052209 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3052209 ']' 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3052209 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3052209 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3052209' 00:29:23.375 killing process with pid 3052209 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3052209 00:29:23.375 07:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 3052209 00:29:23.634 Received shutdown signal, test time was about 1.180645 seconds 00:29:23.634 00:29:23.634 Latency(us) 00:29:23.634 [2024-11-19T06:53:15.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.634 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 length 0x400 00:29:23.634 Nvme1n1 : 1.13 170.65 10.67 0.00 0.00 371101.14 31845.64 301368.51 00:29:23.634 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 length 0x400 00:29:23.634 Nvme2n1 : 1.16 220.39 13.77 0.00 0.00 281747.34 21942.42 281173.71 00:29:23.634 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 length 0x400 00:29:23.634 Nvme3n1 : 1.17 219.07 13.69 0.00 0.00 279306.05 19612.25 302921.96 00:29:23.634 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 length 0x400 00:29:23.634 Nvme4n1 : 1.15 222.42 13.90 0.00 0.00 270132.91 22136.60 301368.51 00:29:23.634 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 length 0x400 00:29:23.634 Nvme5n1 : 1.17 218.08 13.63 0.00 0.00 269843.53 22233.69 299815.06 00:29:23.634 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 length 0x400 00:29:23.634 Nvme6n1 : 1.11 173.37 10.84 0.00 0.00 332712.52 24855.13 316902.97 00:29:23.634 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 length 0x400 00:29:23.634 Nvme7n1 : 1.18 212.75 13.30 0.00 0.00 266293.16 22233.69 321563.31 00:29:23.634 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 
length 0x400 00:29:23.634 Nvme8n1 : 1.13 170.04 10.63 0.00 0.00 326856.75 20971.52 302921.96 00:29:23.634 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 length 0x400 00:29:23.634 Nvme9n1 : 1.15 166.24 10.39 0.00 0.00 329061.01 25437.68 338651.21 00:29:23.634 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:23.634 Verification LBA range: start 0x0 length 0x400 00:29:23.634 Nvme10n1 : 1.14 168.01 10.50 0.00 0.00 318691.56 23301.69 307582.29 00:29:23.634 [2024-11-19T06:53:15.564Z] =================================================================================================================== 00:29:23.634 [2024-11-19T06:53:15.564Z] Total : 1941.01 121.31 0.00 0.00 300206.01 19612.25 338651.21 00:29:24.568 07:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3051911 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:25.502 07:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.502 rmmod nvme_tcp 00:29:25.502 rmmod nvme_fabrics 00:29:25.502 rmmod nvme_keyring 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3051911 ']' 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3051911 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3051911 ']' 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3051911 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051911 
00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051911' 00:29:25.502 killing process with pid 3051911 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3051911 00:29:25.502 07:53:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3051911 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.786 07:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.692 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:30.692 00:29:30.692 real 0m12.524s 00:29:30.692 user 0m42.101s 00:29:30.692 sys 0m2.056s 00:29:30.692 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.693 ************************************ 00:29:30.693 END TEST nvmf_shutdown_tc2 00:29:30.693 ************************************ 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:30.693 ************************************ 00:29:30.693 START TEST nvmf_shutdown_tc3 00:29:30.693 ************************************ 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 
00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 
00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.693 07:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:30.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:30.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.693 07:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:30.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.693 07:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:30.693 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:30.693 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:29:30.694 00:29:30.694 --- 10.0.0.2 ping statistics --- 00:29:30.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.694 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:29:30.694 00:29:30.694 --- 10.0.0.1 ping statistics --- 00:29:30.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.694 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:30.694 
07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3053543 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3053543 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3053543 ']' 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.694 07:53:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:30.694 [2024-11-19 07:53:22.511429] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:29:30.694 [2024-11-19 07:53:22.511564] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.953 [2024-11-19 07:53:22.659256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.953 [2024-11-19 07:53:22.798858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.953 [2024-11-19 07:53:22.798950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.953 [2024-11-19 07:53:22.798975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.953 [2024-11-19 07:53:22.798999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.953 [2024-11-19 07:53:22.799019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:30.953 [2024-11-19 07:53:22.801897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.953 [2024-11-19 07:53:22.801997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.953 [2024-11-19 07:53:22.802039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.953 [2024-11-19 07:53:22.802046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:31.886 [2024-11-19 07:53:23.524685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.886 07:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:31.886 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.887 07:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:31.887 Malloc1 00:29:31.887 [2024-11-19 07:53:23.669746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.887 Malloc2 00:29:32.144 Malloc3 00:29:32.144 Malloc4 00:29:32.144 Malloc5 00:29:32.402 Malloc6 00:29:32.402 Malloc7 00:29:32.661 Malloc8 00:29:32.661 Malloc9 
00:29:32.661 Malloc10 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3053828 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3053828 /var/tmp/bdevperf.sock 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3053828 ']' 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:32.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.661 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.920 { 00:29:32.920 "params": { 00:29:32.920 "name": "Nvme$subsystem", 00:29:32.920 "trtype": "$TEST_TRANSPORT", 00:29:32.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.920 "adrfam": "ipv4", 00:29:32.920 "trsvcid": "$NVMF_PORT", 00:29:32.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.920 "hdgst": ${hdgst:-false}, 00:29:32.920 "ddgst": ${ddgst:-false} 00:29:32.920 }, 00:29:32.920 "method": "bdev_nvme_attach_controller" 00:29:32.920 } 00:29:32.920 EOF 00:29:32.920 )") 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.920 { 00:29:32.920 "params": { 00:29:32.920 "name": "Nvme$subsystem", 00:29:32.920 "trtype": "$TEST_TRANSPORT", 00:29:32.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.920 "adrfam": "ipv4", 00:29:32.920 "trsvcid": "$NVMF_PORT", 00:29:32.920 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.920 "hdgst": ${hdgst:-false}, 00:29:32.920 "ddgst": ${ddgst:-false} 00:29:32.920 }, 00:29:32.920 "method": "bdev_nvme_attach_controller" 00:29:32.920 } 00:29:32.920 EOF 00:29:32.920 )") 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.920 { 00:29:32.920 "params": { 00:29:32.920 "name": "Nvme$subsystem", 00:29:32.920 "trtype": "$TEST_TRANSPORT", 00:29:32.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.920 "adrfam": "ipv4", 00:29:32.920 "trsvcid": "$NVMF_PORT", 00:29:32.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.920 "hdgst": ${hdgst:-false}, 00:29:32.920 "ddgst": ${ddgst:-false} 00:29:32.920 }, 00:29:32.920 "method": "bdev_nvme_attach_controller" 00:29:32.920 } 00:29:32.920 EOF 00:29:32.920 )") 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.920 { 00:29:32.920 "params": { 00:29:32.920 "name": "Nvme$subsystem", 00:29:32.920 "trtype": "$TEST_TRANSPORT", 00:29:32.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.920 "adrfam": "ipv4", 00:29:32.920 "trsvcid": "$NVMF_PORT", 00:29:32.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.920 "hdgst": 
${hdgst:-false}, 00:29:32.920 "ddgst": ${ddgst:-false} 00:29:32.920 }, 00:29:32.920 "method": "bdev_nvme_attach_controller" 00:29:32.920 } 00:29:32.920 EOF 00:29:32.920 )") 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.920 { 00:29:32.920 "params": { 00:29:32.920 "name": "Nvme$subsystem", 00:29:32.920 "trtype": "$TEST_TRANSPORT", 00:29:32.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.920 "adrfam": "ipv4", 00:29:32.920 "trsvcid": "$NVMF_PORT", 00:29:32.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.920 "hdgst": ${hdgst:-false}, 00:29:32.920 "ddgst": ${ddgst:-false} 00:29:32.920 }, 00:29:32.920 "method": "bdev_nvme_attach_controller" 00:29:32.920 } 00:29:32.920 EOF 00:29:32.920 )") 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.920 { 00:29:32.920 "params": { 00:29:32.920 "name": "Nvme$subsystem", 00:29:32.920 "trtype": "$TEST_TRANSPORT", 00:29:32.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.920 "adrfam": "ipv4", 00:29:32.920 "trsvcid": "$NVMF_PORT", 00:29:32.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.920 "hdgst": ${hdgst:-false}, 00:29:32.920 "ddgst": ${ddgst:-false} 00:29:32.920 }, 00:29:32.920 "method": "bdev_nvme_attach_controller" 
00:29:32.920 } 00:29:32.920 EOF 00:29:32.920 )") 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.920 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.920 { 00:29:32.920 "params": { 00:29:32.920 "name": "Nvme$subsystem", 00:29:32.920 "trtype": "$TEST_TRANSPORT", 00:29:32.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.920 "adrfam": "ipv4", 00:29:32.920 "trsvcid": "$NVMF_PORT", 00:29:32.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.920 "hdgst": ${hdgst:-false}, 00:29:32.921 "ddgst": ${ddgst:-false} 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 } 00:29:32.921 EOF 00:29:32.921 )") 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.921 { 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme$subsystem", 00:29:32.921 "trtype": "$TEST_TRANSPORT", 00:29:32.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "$NVMF_PORT", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.921 "hdgst": ${hdgst:-false}, 00:29:32.921 "ddgst": ${ddgst:-false} 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 } 00:29:32.921 EOF 00:29:32.921 )") 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.921 { 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme$subsystem", 00:29:32.921 "trtype": "$TEST_TRANSPORT", 00:29:32.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "$NVMF_PORT", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.921 "hdgst": ${hdgst:-false}, 00:29:32.921 "ddgst": ${ddgst:-false} 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 } 00:29:32.921 EOF 00:29:32.921 )") 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:32.921 { 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme$subsystem", 00:29:32.921 "trtype": "$TEST_TRANSPORT", 00:29:32.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "$NVMF_PORT", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.921 "hdgst": ${hdgst:-false}, 00:29:32.921 "ddgst": ${ddgst:-false} 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 } 00:29:32.921 EOF 00:29:32.921 )") 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:32.921 07:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme1", 00:29:32.921 "trtype": "tcp", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "4420", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:32.921 "hdgst": false, 00:29:32.921 "ddgst": false 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 },{ 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme2", 00:29:32.921 "trtype": "tcp", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "4420", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:32.921 "hdgst": false, 00:29:32.921 "ddgst": false 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 },{ 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme3", 00:29:32.921 "trtype": "tcp", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "4420", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:32.921 "hdgst": false, 00:29:32.921 "ddgst": false 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 },{ 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme4", 00:29:32.921 "trtype": "tcp", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "4420", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:32.921 "hdgst": false, 00:29:32.921 "ddgst": false 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 },{ 
00:29:32.921 "params": { 00:29:32.921 "name": "Nvme5", 00:29:32.921 "trtype": "tcp", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "4420", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:32.921 "hdgst": false, 00:29:32.921 "ddgst": false 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 },{ 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme6", 00:29:32.921 "trtype": "tcp", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "4420", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:32.921 "hdgst": false, 00:29:32.921 "ddgst": false 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 },{ 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme7", 00:29:32.921 "trtype": "tcp", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "4420", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:32.921 "hdgst": false, 00:29:32.921 "ddgst": false 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 },{ 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme8", 00:29:32.921 "trtype": "tcp", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "4420", 00:29:32.921 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:32.921 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:32.921 "hdgst": false, 00:29:32.921 "ddgst": false 00:29:32.921 }, 00:29:32.921 "method": "bdev_nvme_attach_controller" 00:29:32.921 },{ 00:29:32.921 "params": { 00:29:32.921 "name": "Nvme9", 00:29:32.921 "trtype": "tcp", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.921 "adrfam": "ipv4", 00:29:32.921 "trsvcid": "4420", 00:29:32.922 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:32.922 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:32.922 "hdgst": false, 00:29:32.922 "ddgst": false 00:29:32.922 }, 00:29:32.922 "method": "bdev_nvme_attach_controller" 00:29:32.922 },{ 00:29:32.922 "params": { 00:29:32.922 "name": "Nvme10", 00:29:32.922 "trtype": "tcp", 00:29:32.922 "traddr": "10.0.0.2", 00:29:32.922 "adrfam": "ipv4", 00:29:32.922 "trsvcid": "4420", 00:29:32.922 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:32.922 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:32.922 "hdgst": false, 00:29:32.922 "ddgst": false 00:29:32.922 }, 00:29:32.922 "method": "bdev_nvme_attach_controller" 00:29:32.922 }' 00:29:32.922 [2024-11-19 07:53:24.683173] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:32.922 [2024-11-19 07:53:24.683320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053828 ] 00:29:32.922 [2024-11-19 07:53:24.832306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.180 [2024-11-19 07:53:24.960708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.082 Running I/O for 10 seconds... 
00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:35.648 07:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:35.648 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=130 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 130 -ge 100 ']' 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3053543 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3053543 ']' 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3053543 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053543 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053543' 00:29:35.907 killing process with pid 3053543 00:29:35.907 07:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3053543 00:29:35.907 07:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3053543 00:29:35.907 [2024-11-19 07:53:27.827938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828237] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 [2024-11-19 07:53:27.828436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 
[2024-11-19 07:53:27.828455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:35.907 (message repeated for tqpair=0x618000007480 through 07:53:27.829257)
00:29:35.908 [2024-11-19 07:53:27.831938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:35.908 (message repeated for tqpair=0x618000009880 through 07:53:27.832991)
00:29:35.908 [2024-11-19 07:53:27.835473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:35.908 (message repeated for tqpair=0x618000007880 through 07:53:27.836734)
00:29:36.181 [2024-11-19 07:53:27.840218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:36.181 (message repeated for tqpair=0x618000007c80 through 07:53:27.841493)
00:29:36.182 [2024-11-19 07:53:27.843935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 (message repeated for tqpair=0x618000008080 through 07:53:27.844273) 00:29:36.182
[2024-11-19 07:53:27.844291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the 
state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.844990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845054] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.182 [2024-11-19 07:53:27.845254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.183 
[2024-11-19 07:53:27.845272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:36.183 [2024-11-19 07:53:27.845812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.845866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.845897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.845921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.845945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.845967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:36.183 [2024-11-19 07:53:27.846170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:36.183 [2024-11-19 07:53:27.846441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 
07:53:27.846537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:36.183 [2024-11-19 07:53:27.846739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.846919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.846939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:36.183 [2024-11-19 07:53:27.847044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.847073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.847102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.847124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.847147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.847168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.847191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.183 [2024-11-19 07:53:27.847213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.847233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:36.183 [2024-11-19 
07:53:27.848024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.183 [2024-11-19 07:53:27.848868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.183 [2024-11-19 07:53:27.848894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.848917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.184 [2024-11-19 07:53:27.848943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.848965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:12[2024-11-19 07:53:27.849259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same [2024-11-19 07:53:27.849281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:36.184 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:12[2024-11-19 07:53:27.849359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same [2024-11-19 07:53:27.849382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(6) to be set 00:29:36.184 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:12[2024-11-19 07:53:27.849461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008480 is same 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 07:53:27.849489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-19 07:53:27.849589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 with the state(6) to be 
set 00:29:36.184 [2024-11-19 07:53:27.849611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 
07:53:27.849886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.184 [2024-11-19 07:53:27.849930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.184 [2024-11-19 07:53:27.849949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.184 [2024-11-19 07:53:27.849960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.849967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.849996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.849997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850150] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is 
same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850413] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:36.185 [2024-11-19 07:53:27.850500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:36.185 [2024-11-19 07:53:27.850921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.850946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.850969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.851023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.851046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.851072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.851094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.851119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.185 [2024-11-19 07:53:27.851141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.185 [2024-11-19 07:53:27.851167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.186 [2024-11-19 07:53:27.851189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.186 [2024-11-19 07:53:27.851214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.186 [2024-11-19 07:53:27.851251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.186 [2024-11-19 07:53:27.851277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.186 [2024-11-19 07:53:27.851298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.186 [2024-11-19 07:53:27.851323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.186 [2024-11-19 07:53:27.851350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.186 [2024-11-19 07:53:27.851374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.186 [2024-11-19 07:53:27.851396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.186 [2024-11-19 07:53:27.852843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.852880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.852901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.852921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is 
same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.852940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.852965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.852995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853454] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 
[2024-11-19 07:53:27.853696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the 
state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.853984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.854002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.854036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.854055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.854077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.854095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.854113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.854130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.854147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008880 is same with the state(6) to be set 00:29:36.186 [2024-11-19 07:53:27.856950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:36.186 [2024-11-19 07:53:27.857031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:36.186 [2024-11-19 07:53:27.857114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.186 [2024-11-19 07:53:27.857144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.857402] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.857567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.857644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:36.187 [2024-11-19 07:53:27.857716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file 
descriptor 00:29:36.187 [2024-11-19 07:53:27.857765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:36.187 [2024-11-19 07:53:27.857838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.857958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.857992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.187 [2024-11-19 07:53:27.858013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.187 [2024-11-19 07:53:27.858034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.858077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:36.187 [2024-11-19 07:53:27.860752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.860802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.860823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.860878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.860916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.860937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.860956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.860975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.860993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861066] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 
[2024-11-19 07:53:27.861293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.187 [2024-11-19 07:53:27.861807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861974] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.861992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.862026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.862045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:36.188 [2024-11-19 07:53:27.873836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.873947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:36.188 [2024-11-19 07:53:27.874433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.874964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.874989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875012] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 [2024-11-19 07:53:27.875532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.188 [2024-11-19 07:53:27.875556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.188 
[2024-11-19 07:53:27.875581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.875604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.875629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.875652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.875678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.875709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.875736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.875759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.875785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.875808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.875835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.875857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.875883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.875910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.875937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.875985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 
07:53:27.876744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.876966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.876988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.877014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.877037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.877063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.877086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.877112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.877135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.877160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.189 [2024-11-19 07:53:27.877188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 task offset: 17408 on job bdev=Nvme10n1 fails 00:29:36.189 1373.15 IOPS, 85.82 MiB/s [2024-11-19T06:53:28.119Z] [2024-11-19 07:53:27.881824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.189 [2024-11-19 07:53:27.881907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:36.189 [2024-11-19 07:53:27.881938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:36.189 [2024-11-19 07:53:27.882022] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:36.189 [2024-11-19 07:53:27.882138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.189 [2024-11-19 07:53:27.882186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.882213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.189 [2024-11-19 07:53:27.882235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.882259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.189 [2024-11-19 07:53:27.882281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.882303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.189 [2024-11-19 07:53:27.882325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.189 [2024-11-19 07:53:27.882345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:36.189 [2024-11-19 07:53:27.882406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.190 [2024-11-19 07:53:27.882434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.882458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.190 [2024-11-19 07:53:27.882480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.882502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.190 [2024-11-19 07:53:27.882524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.882547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.190 [2024-11-19 07:53:27.882568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.882589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:36.190 [2024-11-19 07:53:27.882635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:36.190 [2024-11-19 07:53:27.882729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:36.190 [2024-11-19 07:53:27.882790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:36.190 [2024-11-19 07:53:27.882904] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.190 [2024-11-19 07:53:27.883013] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.190 
[2024-11-19 07:53:27.883116] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.190 [2024-11-19 07:53:27.883217] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.190 [2024-11-19 07:53:27.883492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.883540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.883599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.883627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.883655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.883678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.883715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.883739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.883765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.883788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 
[2024-11-19 07:53:27.883814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.883836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.883862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.883885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.883911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.883934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.883959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.883982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.884008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.884030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.885934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.885983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.190 [2024-11-19 07:53:27.886571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.190 [2024-11-19 07:53:27.886835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.190 [2024-11-19 07:53:27.886861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.886884] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.886910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.886932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.886958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.886981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 
07:53:27.887727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.887954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.887977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888018] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 
[2024-11-19 07:53:27.888561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.191 [2024-11-19 07:53:27.888804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.191 [2024-11-19 07:53:27.888827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.888852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.888875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.888901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.888924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.888949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.888972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.889013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.889036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.889062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.889084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.889109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.889131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.889156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.889182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.889206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9f80 is same with the state(6) to be set 00:29:36.192 [2024-11-19 07:53:27.890766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.890799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.890832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.890856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.890882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.890905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.890931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.890953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.192 [2024-11-19 07:53:27.890979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.192 [2024-11-19 07:53:27.891844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.891962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.891987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.892031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.892058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.892081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.892106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.892128] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.892152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.892174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.892199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.892222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.892247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.892269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.892294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.192 [2024-11-19 07:53:27.892317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.192 [2024-11-19 07:53:27.892342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.892919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.892945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 
07:53:27.892967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893247] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 
[2024-11-19 07:53:27.893848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.893951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.893992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.894015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.894036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(6) to be set 00:29:36.193 [2024-11-19 07:53:27.895581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.895614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.895651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.895676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.895720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.895746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.895774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.895797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.895823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.895845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.895870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.193 [2024-11-19 07:53:27.895893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.193 [2024-11-19 07:53:27.895919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.895942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.895968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.895990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.194 [2024-11-19 07:53:27.896275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896572] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.896956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.896982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 
07:53:27.897449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897761] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.194 [2024-11-19 07:53:27.897918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.194 [2024-11-19 07:53:27.897940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.897967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.897989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 
[2024-11-19 07:53:27.898348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.898905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.898928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa480 is same with the state(6) to be set 00:29:36.195 [2024-11-19 07:53:27.900501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.900534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.900572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.900596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.900622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.900644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.900670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.900700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.900728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.900751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.195 [2024-11-19 07:53:27.900777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.900800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.900826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.900848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.900873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.900896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.900927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.900950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.900977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.901016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.901042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.901063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.901088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.901110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.901136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.901158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.901183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.901217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.901245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.901267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.195 [2024-11-19 07:53:27.901291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.195 [2024-11-19 07:53:27.901313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.196 [2024-11-19 07:53:27.901624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901916] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.901965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.901991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 
07:53:27.902775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.902949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.902988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.903023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.903045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.903069] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.903091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.903116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.903139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.903163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.903185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.903209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.903230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.903255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.196 [2024-11-19 07:53:27.903277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.196 [2024-11-19 07:53:27.903301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.903322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.903346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.903368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.903393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.903415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.903444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.903466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.903491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.903513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.903538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.903559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.903584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 
[2024-11-19 07:53:27.903606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.903631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.903653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.903701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.903726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.903749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa700 is same with the state(6) to be set 00:29:36.197 [2024-11-19 07:53:27.905323] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.197 [2024-11-19 07:53:27.906522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:36.197 [2024-11-19 07:53:27.906594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:36.197 [2024-11-19 07:53:27.906627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:36.197 [2024-11-19 07:53:27.906685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:36.197 [2024-11-19 07:53:27.906728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:36.197 [2024-11-19 07:53:27.906753] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:36.197 [2024-11-19 07:53:27.906777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:36.197 [2024-11-19 07:53:27.906802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:36.197 [2024-11-19 07:53:27.906864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:36.197 [2024-11-19 07:53:27.906918] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:36.197 [2024-11-19 07:53:27.906958] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:36.197 [2024-11-19 07:53:27.907002] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:29:36.197 [2024-11-19 07:53:27.907207] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.197 [2024-11-19 07:53:27.907927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:36.197 [2024-11-19 07:53:27.907976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:36.197 [2024-11-19 07:53:27.908026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:36.197 [2024-11-19 07:53:27.908212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.197 [2024-11-19 07:53:27.908251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:36.197 [2024-11-19 07:53:27.908276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:36.197 [2024-11-19 07:53:27.908379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.197 [2024-11-19 07:53:27.908414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:36.197 [2024-11-19 07:53:27.908438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:36.197 [2024-11-19 07:53:27.910399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910800] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.910953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.910977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.911002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.911025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.911050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.911073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.911098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.911121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.911146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.911169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.911194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.911216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.911241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.911264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.911289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.911311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.911336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.911359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.197 [2024-11-19 07:53:27.911384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.197 [2024-11-19 07:53:27.911407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911629] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911902] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.911975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.911997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 
07:53:27.912460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.912954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.912976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.913001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.913023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.913049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.913072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.913097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.913119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.913143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.913166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.913191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.913213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.913238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.913260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.198 [2024-11-19 07:53:27.913285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.913307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.198 [2024-11-19 07:53:27.913336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.198 [2024-11-19 07:53:27.913359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.913385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.913407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.913432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.913455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.913480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.913502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.913527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.913550] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.913572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set 00:29:36.199 [2024-11-19 07:53:27.915109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915930] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.915952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.915978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916195] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.199 [2024-11-19 07:53:27.916547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.199 [2024-11-19 07:53:27.916569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.916594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.916617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.916642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.916664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.916701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.916726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 
07:53:27.916751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.916774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.916799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.916821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.916846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.916869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.916894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.916916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.916940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.916962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.916987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.200 [2024-11-19 07:53:27.917556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917822] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.917947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.917970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.918011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.918033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.918057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.918079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.918103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.918125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.918149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.918170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.918194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.200 [2024-11-19 07:53:27.918215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.200 [2024-11-19 07:53:27.918237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set 00:29:36.200 [2024-11-19 07:53:27.920397] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:36.200 [2024-11-19 07:53:27.920469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:36.200 [2024-11-19 07:53:27.920504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:36.200 [2024-11-19 07:53:27.920533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:36.200 [2024-11-19 07:53:27.920708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.200 [2024-11-19 07:53:27.920747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420 00:29:36.200 [2024-11-19 
07:53:27.920773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:36.200 [2024-11-19 07:53:27.920935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.200 [2024-11-19 07:53:27.920971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:36.200 [2024-11-19 07:53:27.920994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:36.200 [2024-11-19 07:53:27.921088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.201 [2024-11-19 07:53:27.921123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:36.201 [2024-11-19 07:53:27.921147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:36.201 [2024-11-19 07:53:27.921241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.201 [2024-11-19 07:53:27.921276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:29:36.201 [2024-11-19 07:53:27.921300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:36.201 [2024-11-19 07:53:27.921329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:36.201 [2024-11-19 07:53:27.921360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:36.201 [2024-11-19 07:53:27.921424] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] 
Unable to perform failover, already in progress. 00:29:36.201 [2024-11-19 07:53:27.921461] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:36.201 [2024-11-19 07:53:27.921492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:36.201 [2024-11-19 07:53:27.921542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:36.201 [2024-11-19 07:53:27.921576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:36.201 [2024-11-19 07:53:27.921610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:36.201 [2024-11-19 07:53:27.922393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.201 [2024-11-19 07:53:27.922441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:36.201 [2024-11-19 07:53:27.922466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:36.201 [2024-11-19 07:53:27.922566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.201 [2024-11-19 07:53:27.922601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:36.201 [2024-11-19 07:53:27.922625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:36.201 [2024-11-19 07:53:27.922749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.201 [2024-11-19 07:53:27.922784] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:29:36.201 [2024-11-19 07:53:27.922808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:36.201 [2024-11-19 07:53:27.922838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:36.201 [2024-11-19 07:53:27.922860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:36.201 [2024-11-19 07:53:27.922881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:36.201 [2024-11-19 07:53:27.922902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:36.201 [2024-11-19 07:53:27.922933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:36.201 [2024-11-19 07:53:27.922953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:36.201 [2024-11-19 07:53:27.922973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:36.201 [2024-11-19 07:53:27.922993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:29:36.201 [2024-11-19 07:53:27.924050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:36.201 [2024-11-19 07:53:27.924929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.924952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.924977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.925015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.925041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.925063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.925087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.925109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.925134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.925156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.925180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.925202] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.925227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.925249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.925274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.201 [2024-11-19 07:53:27.925295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.201 [2024-11-19 07:53:27.925324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.925963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.925986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 
07:53:27.926049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926322] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 
[2024-11-19 07:53:27.926893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.926941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.926966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.927004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.927029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.927051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.927076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.927097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.927122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.202 [2024-11-19 07:53:27.927143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.202 [2024-11-19 07:53:27.927167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.202 [2024-11-19 07:53:27.927196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.202 [2024-11-19 07:53:27.927222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.202 [2024-11-19 07:53:27.927244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:36.202 [2024-11-19 07:53:27.927266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb380 is same with the state(6) to be set
00:29:36.203
00:29:36.203 Latency(us)
00:29:36.203 [2024-11-19T06:53:28.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:36.203 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme1n1 ended in about 1.03 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme1n1 : 1.03 127.87 7.99 62.00 0.00 333455.14 23204.60 341758.10
00:29:36.203 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme2n1 ended in about 1.04 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme2n1 : 1.04 123.42 7.71 61.71 0.00 335348.12 21942.42 306028.85
00:29:36.203 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme3n1 ended in about 1.04 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme3n1 : 1.04 122.83 7.68 61.42 0.00 330390.95 23787.14 315349.52
00:29:36.203 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme4n1 ended in about 1.05 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme4n1 : 1.05 122.27 7.64 61.14 0.00 325303.81 28544.57 344865.00
00:29:36.203 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme5n1 ended in about 1.06 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme5n1 : 1.06 121.14 7.57 60.57 0.00 321933.97 22427.88 312242.63
00:29:36.203 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme6n1 ended in about 1.06 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme6n1 : 1.06 120.61 7.54 60.30 0.00 316799.30 33010.73 309135.74
00:29:36.203 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme7n1 ended in about 1.05 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme7n1 : 1.05 183.17 11.45 9.54 0.00 281852.38 24175.50 292047.83
00:29:36.203 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme8n1 ended in about 1.03 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme8n1 : 1.03 186.87 11.68 62.29 0.00 219254.33 20583.16 310689.19
00:29:36.203 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme9n1 ended in about 1.07 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme9n1 : 1.07 126.13 7.88 59.80 0.00 289641.55 38253.61 290494.39
00:29:36.203 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.203 Job: Nvme10n1 ended in about 1.00 seconds with error
00:29:36.203 Verification LBA range: start 0x0 length 0x400
00:29:36.203 Nvme10n1 : 1.00 136.15 8.51 64.07 0.00 259079.22 10388.67 343311.55
00:29:36.203 [2024-11-19T06:53:28.133Z] ===================================================================================================================
00:29:36.203 [2024-11-19T06:53:28.133Z] Total : 1370.45 85.65 562.83 0.00 298456.05 10388.67 344865.00
00:29:36.203 [2024-11-19 07:53:28.019202] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:36.203 [2024-11-19 07:53:28.019319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:36.203 [2024-11-19 07:53:28.019428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor
00:29:36.203 [2024-11-19 07:53:28.019472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor
00:29:36.203 [2024-11-19 07:53:28.019503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor
00:29:36.203 [2024-11-19 07:53:28.019528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:29:36.203 [2024-11-19 07:53:28.019550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:29:36.203 [2024-11-19 07:53:28.019574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:36.203 [2024-11-19 07:53:28.019600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:29:36.203 [2024-11-19 07:53:28.019625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:36.203 [2024-11-19 07:53:28.019644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:36.203 [2024-11-19 07:53:28.019664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:36.203 [2024-11-19 07:53:28.019724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:36.203 [2024-11-19 07:53:28.019749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:36.203 [2024-11-19 07:53:28.019769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:36.203 [2024-11-19 07:53:28.019789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:36.203 [2024-11-19 07:53:28.019810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:36.203 [2024-11-19 07:53:28.019832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:36.203 [2024-11-19 07:53:28.019851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:36.203 [2024-11-19 07:53:28.019871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:36.203 [2024-11-19 07:53:28.019890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:29:36.203 [2024-11-19 07:53:28.020029] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:36.203 [2024-11-19 07:53:28.020068] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:36.203 [2024-11-19 07:53:28.020097] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:36.203 [2024-11-19 07:53:28.020792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.203 [2024-11-19 07:53:28.020848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:36.203 [2024-11-19 07:53:28.020876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:36.203 [2024-11-19 07:53:28.020900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:36.203 [2024-11-19 07:53:28.020920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:36.203 [2024-11-19 07:53:28.020946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:36.203 [2024-11-19 07:53:28.020968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:29:36.203 [2024-11-19 07:53:28.020990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:36.203 [2024-11-19 07:53:28.021010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:36.203 [2024-11-19 07:53:28.021029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:36.203 [2024-11-19 07:53:28.021063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:36.203 [2024-11-19 07:53:28.021085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:36.203 [2024-11-19 07:53:28.021104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:36.203 [2024-11-19 07:53:28.021123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:36.203 [2024-11-19 07:53:28.021142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:36.203 [2024-11-19 07:53:28.021185] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:29:36.203 [2024-11-19 07:53:28.021218] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:29:36.203 [2024-11-19 07:53:28.021250] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:36.203 [2024-11-19 07:53:28.021278] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:29:36.203 [2024-11-19 07:53:28.021306] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:29:36.203 [2024-11-19 07:53:28.021334] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:36.203 [2024-11-19 07:53:28.022014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:36.203 [2024-11-19 07:53:28.022061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:36.203 [2024-11-19 07:53:28.022088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:36.203 [2024-11-19 07:53:28.022112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:36.203 [2024-11-19 07:53:28.022136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:36.203 [2024-11-19 07:53:28.022160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:36.203 [2024-11-19 07:53:28.022307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:36.203 [2024-11-19 07:53:28.022509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:36.203 [2024-11-19 07:53:28.022684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.203 [2024-11-19 07:53:28.022727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:36.203 [2024-11-19 07:53:28.022753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the 
state(6) to be set 00:29:36.203 [2024-11-19 07:53:28.022890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.203 [2024-11-19 07:53:28.022926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:36.204 [2024-11-19 07:53:28.022955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:36.204 [2024-11-19 07:53:28.023088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.204 [2024-11-19 07:53:28.023123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:29:36.204 [2024-11-19 07:53:28.023146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:36.204 [2024-11-19 07:53:28.023253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.204 [2024-11-19 07:53:28.023287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:36.204 [2024-11-19 07:53:28.023310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:36.204 [2024-11-19 07:53:28.023459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.204 [2024-11-19 07:53:28.023496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:36.204 [2024-11-19 07:53:28.023520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:36.204 [2024-11-19 07:53:28.023653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.204 [2024-11-19 
07:53:28.023694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420 00:29:36.204 [2024-11-19 07:53:28.023732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:36.204 [2024-11-19 07:53:28.023757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.023778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.023798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.023819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:36.204 [2024-11-19 07:53:28.023878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:36.204 [2024-11-19 07:53:28.023911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:36.204 [2024-11-19 07:53:28.024100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.204 [2024-11-19 07:53:28.024137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420 00:29:36.204 [2024-11-19 07:53:28.024161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:36.204 [2024-11-19 07:53:28.024189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:36.204 [2024-11-19 07:53:28.024220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:36.204 [2024-11-19 07:53:28.024249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:36.204 [2024-11-19 07:53:28.024278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:36.204 [2024-11-19 07:53:28.024307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:36.204 [2024-11-19 07:53:28.024335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:36.204 [2024-11-19 07:53:28.024571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.204 [2024-11-19 07:53:28.024619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420 00:29:36.204 [2024-11-19 07:53:28.024644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:36.204 [2024-11-19 07:53:28.024753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.204 [2024-11-19 07:53:28.024789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:36.204 [2024-11-19 07:53:28.024813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:36.204 [2024-11-19 07:53:28.024841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:36.204 [2024-11-19 07:53:28.024867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.024889] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.024909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.024930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:36.204 [2024-11-19 07:53:28.024952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.024972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.024992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.025011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:36.204 [2024-11-19 07:53:28.025048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.025067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.025086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.025105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:29:36.204 [2024-11-19 07:53:28.025126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.025144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.025163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.025181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:36.204 [2024-11-19 07:53:28.025202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.025220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.025240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.025258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:36.204 [2024-11-19 07:53:28.025278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.025297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.025321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.025341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:29:36.204 [2024-11-19 07:53:28.025412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:36.204 [2024-11-19 07:53:28.025446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:36.204 [2024-11-19 07:53:28.025471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.025492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.025512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.025531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:36.204 [2024-11-19 07:53:28.025605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.025632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.025652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.025671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:36.204 [2024-11-19 07:53:28.025716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:36.204 [2024-11-19 07:53:28.025739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:36.204 [2024-11-19 07:53:28.025760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:36.204 [2024-11-19 07:53:28.025780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:38.737 07:53:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3053828 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3053828 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3053828 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:40.116 07:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:40.116 rmmod nvme_tcp 00:29:40.116 rmmod nvme_fabrics 00:29:40.116 rmmod nvme_keyring 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3053543 ']' 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3053543 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3053543 ']' 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3053543 00:29:40.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3053543) - No such process 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3053543 is not found' 00:29:40.116 Process with pid 3053543 is not found 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:40.116 
07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.116 07:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.018 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:42.018 00:29:42.018 real 0m11.548s 00:29:42.018 user 0m33.949s 00:29:42.018 sys 0m2.117s 00:29:42.018 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:42.019 ************************************ 00:29:42.019 END TEST nvmf_shutdown_tc3 00:29:42.019 ************************************ 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:42.019 07:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.019 ************************************ 00:29:42.019 START TEST nvmf_shutdown_tc4 00:29:42.019 ************************************ 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.019 07:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:42.019 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.019 
07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:42.019 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.019 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:42.020 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:42.020 Found net devices under 0000:0a:00.1: cvl_0_1 
00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.020 07:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:29:42.020 00:29:42.020 --- 10.0.0.2 ping statistics --- 00:29:42.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.020 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:42.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:29:42.020 00:29:42.020 --- 10.0.0.1 ping statistics --- 00:29:42.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.020 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:42.020 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3055120 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3055120 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3055120 ']' 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:42.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.279 07:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:42.279 [2024-11-19 07:53:34.069612] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:29:42.279 [2024-11-19 07:53:34.069786] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.538 [2024-11-19 07:53:34.235838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.538 [2024-11-19 07:53:34.378017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.538 [2024-11-19 07:53:34.378104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.538 [2024-11-19 07:53:34.378130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.538 [2024-11-19 07:53:34.378155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.538 [2024-11-19 07:53:34.378175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:42.538 [2024-11-19 07:53:34.381239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.538 [2024-11-19 07:53:34.381352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.538 [2024-11-19 07:53:34.381397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.538 [2024-11-19 07:53:34.381404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:43.472 [2024-11-19 07:53:35.074751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.472 07:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[... the shutdown.sh@28/@29 pair above repeats identically, once per each of the 10 subsystems ...]
00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.472 07:53:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:43.472 Malloc1
00:29:43.472 [2024-11-19 07:53:35.226619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:43.472 Malloc2
00:29:43.472 Malloc3
00:29:43.730 Malloc4
00:29:43.730 Malloc5
00:29:43.988 Malloc6
00:29:43.988 Malloc7
00:29:43.988 Malloc8
00:29:44.247 Malloc9
00:29:44.247 Malloc10
00:29:44.247 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:44.247 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:29:44.247 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:44.247 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:44.247 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3055410
00:29:44.247 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:29:44.247 07:53:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:29:44.510 [2024-11-19 07:53:36.256060] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
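The tc4 flow traced above follows a simple pattern: build one batch of RPC calls per subsystem, start a long-running perf workload against the target, then kill the target while I/O is in flight and expect transport errors rather than a hang. The sketch below is a minimal, self-contained reconstruction of that pattern under stated assumptions: `sleep` stands in for both `spdk_nvme_perf` and the target process, and the RPC lines (bdev names, NQNs) are illustrative guesses, not copied from the real shutdown.sh.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the nvmf_shutdown_tc4 flow seen in this trace.

num_subsystems=({1..10})
rpcs=$(mktemp)

# shutdown.sh@28/@29 analogue: emit a batch of RPC calls per subsystem into
# one file, which the real script feeds to rpc_cmd in a single invocation
# (here we only count the generated lines).
for i in "${num_subsystems[@]}"; do
cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_lines=$(wc -l < "$rpcs")
rm -f "$rpcs"

# shutdown.sh@148-152 analogue: background workload, remember its pid,
# and install a cleanup trap so the workload is reaped on any exit path.
sleep 30 &                  # stands in for spdk_nvme_perf -q 128 ... -t 20
perfpid=$!
trap 'kill -9 $perfpid 2>/dev/null || true' SIGINT SIGTERM EXIT

# killprocess()-style check: only kill a pid that exists and is not a
# sudo wrapper (the trace shows the same kill -0 / ps comm= / sudo test).
if kill -0 "$perfpid" 2>/dev/null; then
    process_name=$(ps --no-headers -o comm= "$perfpid")
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $perfpid"
        kill "$perfpid"
        wait "$perfpid" 2>/dev/null
    fi
fi
echo "rpc_lines=$rpc_lines"
```

With 10 subsystems and 4 RPC lines each, the batch file holds 40 lines; the kill/wait at the end mirrors the `killprocess 3055120` sequence in the log.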
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3055120
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3055120 ']'
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3055120
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055120
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055120'
killing process with pid 3055120
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3055120
00:29:49.875 07:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3055120
00:29:49.875 [2024-11-19 07:53:41.206470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000d480 is same with the state(6) to be set
[... the tcp.c:1773 message above repeats 5 more times, through 07:53:41.206728 ...]
00:29:49.875 Write completed with error (sct=0, sc=8)
00:29:49.875 starting I/O failed: -6
[... interleaved "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted; the distinct qpair-level errors follow ...]
00:29:49.875 [2024-11-19 07:53:41.212493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.875 [2024-11-19 07:53:41.214960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.876 [2024-11-19 07:53:41.217657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.876 [2024-11-19 07:53:41.227351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:49.876 NVMe io qpair process completion error
00:29:49.876 [2024-11-19 07:53:41.229461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.877 [2024-11-19 07:53:41.231817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.877 [2024-11-19 07:53:41.234512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:29:49.878 starting I/O failed:
-6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 [2024-11-19 07:53:41.247413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:49.878 NVMe io qpair process completion error 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write 
completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 [2024-11-19 07:53:41.249758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write 
completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O 
failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 [2024-11-19 07:53:41.252021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write 
completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.878 Write completed with error (sct=0, sc=8) 00:29:49.878 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 
00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 [2024-11-19 07:53:41.254740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, 
sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error 
(sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with 
error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 [2024-11-19 07:53:41.268298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:49.879 NVMe io qpair process completion error 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 starting I/O failed: -6 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.879 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write 
completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 [2024-11-19 07:53:41.270514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write 
completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error 
(sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 [2024-11-19 07:53:41.272531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 
00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 Write completed with error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6 00:29:49.880 Write completed with 
error (sct=0, sc=8) 00:29:49.880 starting I/O failed: -6
00:29:49.880 Write completed with error (sct=0, sc=8)
00:29:49.880 starting I/O failed: -6
    [the "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pair above repeats for every outstanding write on the failing qpair; verbatim repeats elided here and before each error event below]
00:29:49.880 [2024-11-19 07:53:41.275351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.881 [2024-11-19 07:53:41.284980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:49.881 NVMe io qpair process completion error
00:29:49.881 [2024-11-19 07:53:41.286904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.882 [2024-11-19 07:53:41.289175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.882 [2024-11-19 07:53:41.291825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.882 [2024-11-19 07:53:41.304143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:49.882 NVMe io qpair process completion error
00:29:49.883 [2024-11-19 07:53:41.306050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.883 [2024-11-19 07:53:41.308215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.883 [2024-11-19 07:53:41.310915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.884 [2024-11-19 07:53:41.324804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:49.884 NVMe io qpair process completion error
00:29:49.884 [2024-11-19 07:53:41.326951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.884 Write completed with error (sct=0, sc=8)
00:29:49.884 starting I/O failed: -6
00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.884 starting I/O failed: -6 00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.884 starting I/O failed: -6 00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.884 starting I/O failed: -6 00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.884 starting I/O failed: -6 00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.884 starting I/O failed: -6 00:29:49.884 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 
00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 [2024-11-19 07:53:41.329196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 
starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 
Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 [2024-11-19 07:53:41.331786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 
00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, 
sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.885 starting I/O failed: -6 00:29:49.885 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error 
(sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 [2024-11-19 07:53:41.347528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:49.886 NVMe io qpair process completion error 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 
starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 [2024-11-19 07:53:41.349267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 
Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 [2024-11-19 07:53:41.351193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: 
-6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with 
error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.886 Write completed with error (sct=0, sc=8) 00:29:49.886 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 [2024-11-19 07:53:41.353994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 
Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 
00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: -6 00:29:49.887 Write completed with error (sct=0, sc=8) 00:29:49.887 starting I/O failed: 
-6
00:29:49.887 Write completed with error (sct=0, sc=8)
00:29:49.887 starting I/O failed: -6
[the "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pair repeats for every outstanding write on each failing qpair; duplicate lines trimmed]
00:29:49.887 [2024-11-19 07:53:41.363468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.887 NVMe io qpair process completion error
00:29:49.887 [2024-11-19 07:53:41.365365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.888 [2024-11-19 07:53:41.367570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.888 [2024-11-19 07:53:41.370486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.889 [2024-11-19 07:53:41.379961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:49.889 NVMe io qpair process completion error
00:29:49.889 [2024-11-19 07:53:41.381944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:49.889 [2024-11-19 07:53:41.384156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.890 [2024-11-19 07:53:41.386769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.890 [2024-11-19 07:53:41.401941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:49.890 NVMe io qpair process completion error
00:29:49.890 Initializing NVMe Controllers
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:49.890 Controller IO queue size 128, less than required.
00:29:49.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:49.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:49.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:49.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:49.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:49.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:49.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:49.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:49.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:49.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:49.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:49.891 Initialization complete. Launching workers.
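[Editor's aside, not part of the captured output: the repeated (sct=0, sc=8) status above decodes, per the NVMe Base Specification's Generic Command Status table, to "Command Aborted due to SQ Deletion", which is consistent with qpairs being torn down while writes were in flight during this shutdown test. A minimal decoder sketch covering only the codes seen in this log:]

```python
# Decode the (sct, sc) pairs printed by the SPDK NVMe driver above.
# Tables are intentionally partial; values follow the NVMe Base
# Specification status-code encoding (SCT = status code type).

STATUS_CODE_TYPES = {
    0: "Generic Command Status",
    1: "Command Specific Status",
    2: "Media and Data Integrity Errors",
    3: "Path Related Status",
}

GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x06: "Internal Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a readable description of an NVMe completion status."""
    sct_name = STATUS_CODE_TYPES.get(sct, f"Unknown SCT {sct}")
    if sct == 0:
        sc_name = GENERIC_STATUS.get(sc, f"Unknown SC 0x{sc:02x}")
    else:
        sc_name = f"SC 0x{sc:02x}"
    return f"{sct_name}: {sc_name}"

print(decode_status(0, 8))  # the (sct=0, sc=8) pair from this log
```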
00:29:49.891 ======================================================== 00:29:49.891 Latency(us) 00:29:49.891 Device Information : IOPS MiB/s Average min max 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1449.26 62.27 88345.93 1300.96 219233.39 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1388.55 59.66 92343.86 1524.23 263895.72 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1461.61 62.80 87869.29 2177.79 277044.18 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1373.48 59.02 90017.62 2487.62 176242.83 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1387.71 59.63 89240.03 2211.26 170849.39 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1390.02 59.73 89302.13 1574.40 170601.50 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1409.28 60.55 88294.73 1984.05 161618.11 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1426.86 61.31 87348.60 2137.68 168241.74 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1453.87 62.47 85908.21 1530.30 183313.30 00:29:49.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1401.74 60.23 89313.39 2261.49 232739.33 00:29:49.891 ======================================================== 00:29:49.891 Total : 14142.37 607.68 88772.83 1300.96 277044.18 00:29:49.891 00:29:49.891 [2024-11-19 07:53:41.430637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017c80 is same with the state(6) to be set 00:29:49.891 [2024-11-19 07:53:41.430790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016d80 is same with the state(6) to be set 00:29:49.891 [2024-11-19 07:53:41.430876] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018180 is same with the state(6) to be set 00:29:49.891 [2024-11-19 07:53:41.430964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(6) to be set 00:29:49.891 [2024-11-19 07:53:41.431049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:29:49.891 [2024-11-19 07:53:41.431132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017780 is same with the state(6) to be set 00:29:49.891 [2024-11-19 07:53:41.431215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000018680 is same with the state(6) to be set 00:29:49.891 [2024-11-19 07:53:41.431302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:29:49.891 [2024-11-19 07:53:41.431384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(6) to be set 00:29:49.891 [2024-11-19 07:53:41.431478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000017280 is same with the state(6) to be set 00:29:49.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:52.423 07:53:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3055410 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3055410 00:29:53.363 07:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3055410 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:53.363 07:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:53.363 07:53:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:53.363 rmmod nvme_tcp 00:29:53.363 rmmod nvme_fabrics 00:29:53.363 rmmod nvme_keyring 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3055120 ']' 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3055120 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3055120 ']' 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3055120 00:29:53.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3055120) - No such process 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3055120 is not 
found' 00:29:53.363 Process with pid 3055120 is not found 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.363 07:53:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.273 00:29:55.273 real 0m13.286s 00:29:55.273 user 0m36.225s 00:29:55.273 sys 0m5.551s 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:55.273 ************************************ 00:29:55.273 END TEST nvmf_shutdown_tc4 00:29:55.273 ************************************ 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:55.273 00:29:55.273 real 0m54.774s 00:29:55.273 user 2m47.695s 00:29:55.273 sys 0m13.850s 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:55.273 ************************************ 00:29:55.273 END TEST nvmf_shutdown 00:29:55.273 ************************************ 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:55.273 ************************************ 00:29:55.273 START TEST nvmf_nsid 00:29:55.273 ************************************ 00:29:55.273 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:55.273 * Looking for test storage... 
00:29:55.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.532 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.533 
07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:55.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.533 --rc genhtml_branch_coverage=1 00:29:55.533 --rc genhtml_function_coverage=1 00:29:55.533 --rc genhtml_legend=1 00:29:55.533 --rc geninfo_all_blocks=1 00:29:55.533 --rc 
geninfo_unexecuted_blocks=1 00:29:55.533 00:29:55.533 ' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:55.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.533 --rc genhtml_branch_coverage=1 00:29:55.533 --rc genhtml_function_coverage=1 00:29:55.533 --rc genhtml_legend=1 00:29:55.533 --rc geninfo_all_blocks=1 00:29:55.533 --rc geninfo_unexecuted_blocks=1 00:29:55.533 00:29:55.533 ' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:55.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.533 --rc genhtml_branch_coverage=1 00:29:55.533 --rc genhtml_function_coverage=1 00:29:55.533 --rc genhtml_legend=1 00:29:55.533 --rc geninfo_all_blocks=1 00:29:55.533 --rc geninfo_unexecuted_blocks=1 00:29:55.533 00:29:55.533 ' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:55.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.533 --rc genhtml_branch_coverage=1 00:29:55.533 --rc genhtml_function_coverage=1 00:29:55.533 --rc genhtml_legend=1 00:29:55.533 --rc geninfo_all_blocks=1 00:29:55.533 --rc geninfo_unexecuted_blocks=1 00:29:55.533 00:29:55.533 ' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.533 07:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:55.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:55.533 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:55.534 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:55.534 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.534 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:55.534 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.534 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:55.534 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:55.534 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.534 07:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:57.439 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.439 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:57.439 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:57.440 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:57.440 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:57.440 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:57.440 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:57.440 07:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:57.440 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:57.704 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:29:57.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:29:57.704 00:29:57.704 --- 10.0.0.2 ping statistics --- 00:29:57.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.704 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:29:57.704 00:29:57.704 --- 10.0.0.1 ping statistics --- 00:29:57.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.704 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:57.704 07:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3058315 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3058315 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3058315 ']' 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.704 07:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:57.704 [2024-11-19 07:53:49.619995] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:29:57.704 [2024-11-19 07:53:49.620141] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.963 [2024-11-19 07:53:49.769655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.963 [2024-11-19 07:53:49.892270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.963 [2024-11-19 07:53:49.892362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.963 [2024-11-19 07:53:49.892382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.963 [2024-11-19 07:53:49.892403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.963 [2024-11-19 07:53:49.892418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
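The harness above blocks on `waitforlisten 3058315` until the freshly started `nvmf_tgt` is up and its RPC socket (`/var/tmp/spdk.sock`) exists. A minimal sketch of that polling pattern, as a hypothetical standalone helper (the real `waitforlisten` in `autotest_common.sh` additionally checks that the PID is still alive and that the socket answers RPCs):

```shell
# Hypothetical helper sketching the waitforlisten poll loop: retry until a
# path (e.g. a UNIX-domain RPC socket) appears, with a bounded retry count.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$path" ] && return 0   # socket/file showed up
        i=$((i + 1))
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

This only covers the path-exists half of the check; it is a sketch of the pattern, not the harness's actual function.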
00:29:57.963 [2024-11-19 07:53:49.893900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3058469 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.900 
07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=4c8f361f-9b0f-4ea6-b3f8-5b2c555943ce 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=242100ef-b05b-45a3-955c-0f7dd8014351 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7f8bf381-1f88-4a70-9f10-3c21a035c213 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:58.900 null0 00:29:58.900 null1 00:29:58.900 null2 00:29:58.900 [2024-11-19 07:53:50.700862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.900 [2024-11-19 07:53:50.725157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
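The three UUIDs generated above (`ns1uuid`, `ns2uuid`, `ns3uuid`) are later compared against the NGUIDs reported by `nvme id-ns`. As the trace shows (`tr -d -` followed by an upper-case echo), the conversion amounts to stripping the dashes from the UUID and upper-casing the hex digits. A sketch of that `uuid2nguid` step as a standalone function (the harness's version lives in `nvmf/common.sh`):

```shell
# Sketch of uuid2nguid: an NVMe NGUID here is the namespace UUID with the
# dashes removed and the hex digits upper-cased.
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}
```

For example, `4c8f361f-9b0f-4ea6-b3f8-5b2c555943ce` becomes `4C8F361F9B0F4EA6B3F85B2C555943CE`, matching the comparison in the trace.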
00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3058469 /var/tmp/tgt2.sock 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3058469 ']' 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:58.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.900 07:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:58.900 [2024-11-19 07:53:50.771507] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
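The `get_main_ns_ip` call traced above resolves the address the second target should dial: it maps the transport name to the *name* of the environment variable holding the IP (`ip_candidates["tcp"]=NVMF_INITIATOR_IP`), then expands that variable indirectly. A simplified bash sketch of that selection (hedged: the real helper in `nvmf/common.sh` carries more fallbacks):

```shell
# Sketch of get_main_ns_ip: pick the env-var name for this transport from an
# associative array, then use bash indirect expansion to read its value.
get_main_ns_ip() {
    local transport=$1 var ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    var=${ip_candidates[$transport]}
    [ -n "$var" ] || return 1       # unknown transport
    ip=${!var}                      # indirect expansion, e.g. $NVMF_INITIATOR_IP
    [ -n "$ip" ] && echo "$ip"
}
```

With `NVMF_INITIATOR_IP=10.0.0.1` set, `get_main_ns_ip tcp` prints `10.0.0.1`, which is exactly how `tgt2addr` ends up as `10.0.0.1` in the trace.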
00:29:58.901 [2024-11-19 07:53:50.771662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058469 ] 00:29:59.159 [2024-11-19 07:53:50.906542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.159 [2024-11-19 07:53:51.031495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.099 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.099 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:00.099 07:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:00.665 [2024-11-19 07:53:52.384427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.665 [2024-11-19 07:53:52.400775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:00.665 nvme0n1 nvme0n2 00:30:00.665 nvme1n1 00:30:00.665 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:00.665 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:00.665 07:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ 
nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:01.234 07:53:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 4c8f361f-9b0f-4ea6-b3f8-5b2c555943ce 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4c8f361f9b0f4ea6b3f85b2c555943ce 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4C8F361F9B0F4EA6B3F85B2C555943CE 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 4C8F361F9B0F4EA6B3F85B2C555943CE == \4\C\8\F\3\6\1\F\9\B\0\F\4\E\A\6\B\3\F\8\5\B\2\C\5\5\5\9\4\3\C\E ]] 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 242100ef-b05b-45a3-955c-0f7dd8014351 00:30:02.172 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:02.432 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:02.432 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:02.432 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:02.432 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:02.432 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=242100efb05b45a3955c0f7dd8014351 00:30:02.432 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 242100EFB05B45A3955C0F7DD8014351 00:30:02.432 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 242100EFB05B45A3955C0F7DD8014351 == \2\4\2\1\0\0\E\F\B\0\5\B\4\5\A\3\9\5\5\C\0\F\7\D\D\8\0\1\4\3\5\1 ]] 00:30:02.432 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:02.432 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7f8bf381-1f88-4a70-9f10-3c21a035c213 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:02.433 07:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7f8bf3811f884a709f103c21a035c213 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7F8BF3811F884A709F103C21A035C213 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7F8BF3811F884A709F103C21A035C213 == \7\F\8\B\F\3\8\1\1\F\8\8\4\A\7\0\9\F\1\0\3\C\2\1\A\0\3\5\C\2\1\3 ]] 00:30:02.433 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3058469 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3058469 ']' 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3058469 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058469 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 3058469' 00:30:02.692 killing process with pid 3058469 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3058469 00:30:02.692 07:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3058469 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:05.230 rmmod nvme_tcp 00:30:05.230 rmmod nvme_fabrics 00:30:05.230 rmmod nvme_keyring 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3058315 ']' 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3058315 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3058315 ']' 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3058315 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058315 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058315' 00:30:05.230 killing process with pid 3058315 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3058315 00:30:05.230 07:53:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3058315 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.194 07:53:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.099 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.099 00:30:08.099 real 0m12.876s 00:30:08.099 user 0m15.770s 00:30:08.099 sys 0m3.024s 00:30:08.099 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.099 07:54:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:08.099 ************************************ 00:30:08.099 END TEST nvmf_nsid 00:30:08.099 ************************************ 00:30:08.357 07:54:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:08.357 00:30:08.357 real 18m38.058s 00:30:08.357 user 51m13.779s 00:30:08.357 sys 3m37.004s 00:30:08.357 07:54:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.357 07:54:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:08.357 ************************************ 00:30:08.357 END TEST nvmf_target_extra 00:30:08.357 ************************************ 00:30:08.357 07:54:00 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:08.357 07:54:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:08.357 07:54:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.357 07:54:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.357 ************************************ 00:30:08.357 START TEST nvmf_host 00:30:08.357 ************************************ 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 
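The cleanup path traced above (`killprocess 3058469`, then `killprocess 3058315`) checks the PID is alive with `kill -0`, inspects the process name via `ps --no-headers -o comm=` so it never signals a bare `sudo` wrapper, then kills and reaps it. A simplified sketch of that logic (error handling trimmed; the real function is in `autotest_common.sh`):

```shell
# Sketch of killprocess: validate the PID, refuse to kill sudo itself,
# SIGTERM the process, then wait to reap it.
killprocess() {
    local pid=$1 name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still alive?
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1        # never signal the sudo parent
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap the child; ignore signal status
}
```

Note `wait` only reaps the caller's own children, which is the case in the harness since it launched the targets itself.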
00:30:08.357 * Looking for test storage... 00:30:08.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:08.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.357 --rc genhtml_branch_coverage=1 00:30:08.357 --rc genhtml_function_coverage=1 00:30:08.357 --rc genhtml_legend=1 00:30:08.357 --rc geninfo_all_blocks=1 00:30:08.357 --rc geninfo_unexecuted_blocks=1 00:30:08.357 00:30:08.357 ' 00:30:08.357 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:08.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.357 --rc genhtml_branch_coverage=1 00:30:08.357 --rc genhtml_function_coverage=1 00:30:08.357 --rc genhtml_legend=1 00:30:08.357 --rc 
geninfo_all_blocks=1 00:30:08.358 --rc geninfo_unexecuted_blocks=1 00:30:08.358 00:30:08.358 ' 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:08.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.358 --rc genhtml_branch_coverage=1 00:30:08.358 --rc genhtml_function_coverage=1 00:30:08.358 --rc genhtml_legend=1 00:30:08.358 --rc geninfo_all_blocks=1 00:30:08.358 --rc geninfo_unexecuted_blocks=1 00:30:08.358 00:30:08.358 ' 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:08.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.358 --rc genhtml_branch_coverage=1 00:30:08.358 --rc genhtml_function_coverage=1 00:30:08.358 --rc genhtml_legend=1 00:30:08.358 --rc geninfo_all_blocks=1 00:30:08.358 --rc geninfo_unexecuted_blocks=1 00:30:08.358 00:30:08.358 ' 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:08.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.358 ************************************ 00:30:08.358 START TEST nvmf_multicontroller 00:30:08.358 ************************************ 00:30:08.358 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:08.618 * Looking for test storage... 
00:30:08.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:08.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.618 --rc genhtml_branch_coverage=1 00:30:08.618 --rc genhtml_function_coverage=1 
00:30:08.618 --rc genhtml_legend=1 00:30:08.618 --rc geninfo_all_blocks=1 00:30:08.618 --rc geninfo_unexecuted_blocks=1 00:30:08.618 00:30:08.618 ' 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:08.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.618 --rc genhtml_branch_coverage=1 00:30:08.618 --rc genhtml_function_coverage=1 00:30:08.618 --rc genhtml_legend=1 00:30:08.618 --rc geninfo_all_blocks=1 00:30:08.618 --rc geninfo_unexecuted_blocks=1 00:30:08.618 00:30:08.618 ' 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:08.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.618 --rc genhtml_branch_coverage=1 00:30:08.618 --rc genhtml_function_coverage=1 00:30:08.618 --rc genhtml_legend=1 00:30:08.618 --rc geninfo_all_blocks=1 00:30:08.618 --rc geninfo_unexecuted_blocks=1 00:30:08.618 00:30:08.618 ' 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:08.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.618 --rc genhtml_branch_coverage=1 00:30:08.618 --rc genhtml_function_coverage=1 00:30:08.618 --rc genhtml_legend=1 00:30:08.618 --rc geninfo_all_blocks=1 00:30:08.618 --rc geninfo_unexecuted_blocks=1 00:30:08.618 00:30:08.618 ' 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.618 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.619 07:54:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:08.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.619 07:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:10.522 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:10.522 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.522 07:54:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:10.522 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:10.522 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.522 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:10.523 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.781 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:10.781 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:10.781 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:10.781 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:10.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:10.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms
00:30:10.781 
00:30:10.781 --- 10.0.0.2 ping statistics ---
00:30:10.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:10.781 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms
00:30:10.781 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:10.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:10.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms
00:30:10.781 
00:30:10.781 --- 10.0.0.1 ping statistics ---
00:30:10.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:10.782 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3061460
00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3061460 00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3061460 ']' 00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:10.782 07:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:10.782 [2024-11-19 07:54:02.620106] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:10.782 [2024-11-19 07:54:02.620246] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.042 [2024-11-19 07:54:02.768522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:11.042 [2024-11-19 07:54:02.893366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.042 [2024-11-19 07:54:02.893440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:11.042 [2024-11-19 07:54:02.893461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.042 [2024-11-19 07:54:02.893482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.042 [2024-11-19 07:54:02.893498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.042 [2024-11-19 07:54:02.895954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.042 [2024-11-19 07:54:02.895995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.042 [2024-11-19 07:54:02.896018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 [2024-11-19 07:54:03.653262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 Malloc0 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 [2024-11-19 
07:54:03.766167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 [2024-11-19 07:54:03.773959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 Malloc1 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3061699 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3061699 /var/tmp/bdevperf.sock 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3061699 ']' 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:11.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.979 07:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:13.356 07:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.356 07:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:13.356 07:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:13.356 07:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.356 07:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:13.356 NVMe0n1 00:30:13.356 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.356 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:13.356 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:13.356 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.356 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:13.356 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.356 1 00:30:13.356 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:13.356 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:13.357 07:54:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:30:13.357 request:
00:30:13.357 {
00:30:13.357 "name": "NVMe0",
00:30:13.357 "trtype": "tcp",
00:30:13.357 "traddr": "10.0.0.2",
00:30:13.357 "adrfam": "ipv4",
00:30:13.357 "trsvcid": "4420",
00:30:13.357 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:13.357 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:30:13.357 "hostaddr": "10.0.0.1",
00:30:13.357 "prchk_reftag": false,
00:30:13.357 "prchk_guard": false,
00:30:13.357 "hdgst": false,
00:30:13.357 "ddgst": false,
00:30:13.357 "allow_unrecognized_csi": false,
00:30:13.357 "method": "bdev_nvme_attach_controller",
00:30:13.357 "req_id": 1
00:30:13.357 }
00:30:13.357 Got JSON-RPC error response
00:30:13.357 response:
00:30:13.357 {
00:30:13.357 "code": -114,
00:30:13.357 "message": "A controller named NVMe0 already exists with the specified network path"
00:30:13.357 }
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:30:13.357 request:
00:30:13.357 {
00:30:13.357 "name": "NVMe0",
00:30:13.357 "trtype": "tcp",
00:30:13.357 "traddr": "10.0.0.2",
00:30:13.357 "adrfam": "ipv4",
00:30:13.357 "trsvcid": "4420",
00:30:13.357 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:30:13.357 "hostaddr": "10.0.0.1",
00:30:13.357 "prchk_reftag": false,
00:30:13.357 "prchk_guard": false,
00:30:13.357 "hdgst": false,
00:30:13.357 "ddgst": false,
00:30:13.357 "allow_unrecognized_csi": false,
00:30:13.357 "method": "bdev_nvme_attach_controller",
00:30:13.357 "req_id": 1
00:30:13.357 }
00:30:13.357 Got JSON-RPC error response
00:30:13.357 response:
00:30:13.357 {
00:30:13.357 "code": -114,
00:30:13.357 "message": "A controller named NVMe0 already exists with the specified network path"
00:30:13.357 }
00:30:13.357 07:54:05
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:30:13.357 request:
00:30:13.357 {
00:30:13.357 "name": "NVMe0",
00:30:13.357 "trtype": "tcp",
00:30:13.357 "traddr": "10.0.0.2",
00:30:13.357 "adrfam": "ipv4",
00:30:13.357 "trsvcid": "4420",
00:30:13.357 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:13.357 "hostaddr": "10.0.0.1",
00:30:13.357 "prchk_reftag": false,
00:30:13.357 "prchk_guard": false,
00:30:13.357 "hdgst": false,
00:30:13.357 "ddgst": false,
00:30:13.357 "multipath": "disable",
00:30:13.357 "allow_unrecognized_csi": false,
00:30:13.357 "method": "bdev_nvme_attach_controller",
00:30:13.357 "req_id": 1
00:30:13.357 }
00:30:13.357 Got JSON-RPC error response
00:30:13.357 response:
00:30:13.357 {
00:30:13.357 "code": -114,
00:30:13.357 "message": "A controller named NVMe0 already exists and multipath is disabled"
00:30:13.357 }
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:30:13.357 request:
00:30:13.357 {
00:30:13.357 "name": "NVMe0",
00:30:13.357 "trtype": "tcp",
00:30:13.357 "traddr": "10.0.0.2",
00:30:13.357 "adrfam": "ipv4",
00:30:13.357 "trsvcid": "4420",
00:30:13.357 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:13.357 "hostaddr": "10.0.0.1",
00:30:13.357 "prchk_reftag": false,
00:30:13.357 "prchk_guard": false,
00:30:13.357 "hdgst": false,
00:30:13.357 "ddgst": false,
00:30:13.357 "multipath": "failover",
00:30:13.357 "allow_unrecognized_csi": false,
00:30:13.357 "method": "bdev_nvme_attach_controller",
00:30:13.357 "req_id": 1
00:30:13.357 }
00:30:13.357 Got JSON-RPC error response
00:30:13.357 response:
00:30:13.357 {
00:30:13.357 "code": -114,
00:30:13.357 "message": "A controller named NVMe0 already exists with the specified network path"
00:30:13.357 }
00:30:13.357 07:54:05
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:13.357 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:13.358 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:13.358 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:13.358 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.358 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:13.618 NVMe0n1 00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:30:13.618 
00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:30:13.618 07:54:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:14.996 {
00:30:14.996 "results": [
00:30:14.996 {
00:30:14.996 "job": "NVMe0n1",
00:30:14.996 "core_mask": "0x1",
00:30:14.996 "workload": "write",
00:30:14.996 "status": "finished",
00:30:14.996 "queue_depth": 128,
00:30:14.996 "io_size": 4096,
00:30:14.996 "runtime": 1.008505,
00:30:14.996 "iops": 12658.340811399052,
00:30:14.996 "mibps": 49.446643794527546,
00:30:14.996 "io_failed": 0,
00:30:14.996 "io_timeout": 0,
00:30:14.996 "avg_latency_us": 10079.098531399957,
00:30:14.996 "min_latency_us": 2669.9851851851854,
00:30:14.996 "max_latency_us": 19709.345185185186
00:30:14.996 }
00:30:14.996 ],
00:30:14.996 "core_count": 1
00:30:14.996 }
00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3061699 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3061699 ']' 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3061699 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061699 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061699' 00:30:14.996 killing process with pid 3061699 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3061699 00:30:14.996 07:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3061699 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:15.564 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:15.564 [2024-11-19 07:54:03.970011] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:15.564 [2024-11-19 07:54:03.970184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061699 ] 00:30:15.564 [2024-11-19 07:54:04.109193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.564 [2024-11-19 07:54:04.234963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.564 [2024-11-19 07:54:05.409267] bdev.c:4686:bdev_name_add: *ERROR*: Bdev name c0a61205-1fa2-4b41-8839-832e0900cc63 already exists 00:30:15.564 [2024-11-19 07:54:05.409328] bdev.c:7824:bdev_register: *ERROR*: Unable to add uuid:c0a61205-1fa2-4b41-8839-832e0900cc63 alias for bdev NVMe1n1 00:30:15.564 [2024-11-19 07:54:05.409362] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:15.564 Running I/O for 1 seconds... 00:30:15.564 12574.00 IOPS, 49.12 MiB/s 00:30:15.564 Latency(us) 00:30:15.564 [2024-11-19T06:54:07.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.564 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:15.564 NVMe0n1 : 1.01 12658.34 49.45 0.00 0.00 10079.10 2669.99 19709.35 00:30:15.564 [2024-11-19T06:54:07.494Z] =================================================================================================================== 00:30:15.564 [2024-11-19T06:54:07.494Z] Total : 12658.34 49.45 0.00 0.00 10079.10 2669.99 19709.35 00:30:15.564 Received shutdown signal, test time was about 1.000000 seconds 00:30:15.564 00:30:15.564 Latency(us) 00:30:15.564 [2024-11-19T06:54:07.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.564 [2024-11-19T06:54:07.494Z] =================================================================================================================== 00:30:15.564 [2024-11-19T06:54:07.494Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:30:15.564 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:15.564 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:15.564 rmmod nvme_tcp 00:30:15.564 rmmod nvme_fabrics 00:30:15.822 rmmod nvme_keyring 00:30:15.822 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3061460 ']' 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3061460 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3061460 ']' 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3061460 
00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061460 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061460' 00:30:15.823 killing process with pid 3061460 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3061460 00:30:15.823 07:54:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3061460 00:30:17.232 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:17.232 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.233 07:54:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.160 07:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:19.160 00:30:19.160 real 0m10.687s 00:30:19.160 user 0m22.197s 00:30:19.160 sys 0m2.642s 00:30:19.160 07:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.160 07:54:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.160 ************************************ 00:30:19.160 END TEST nvmf_multicontroller 00:30:19.160 ************************************ 00:30:19.160 07:54:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:19.160 07:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:19.160 07:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.160 07:54:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.160 ************************************ 00:30:19.160 START TEST nvmf_aer 00:30:19.160 ************************************ 00:30:19.160 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:19.160 * Looking for test storage... 
00:30:19.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:19.160 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:19.160 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:30:19.160 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:19.420 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:19.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.421 --rc genhtml_branch_coverage=1 00:30:19.421 --rc genhtml_function_coverage=1 00:30:19.421 --rc genhtml_legend=1 00:30:19.421 --rc geninfo_all_blocks=1 00:30:19.421 --rc geninfo_unexecuted_blocks=1 00:30:19.421 00:30:19.421 ' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:19.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.421 --rc 
genhtml_branch_coverage=1 00:30:19.421 --rc genhtml_function_coverage=1 00:30:19.421 --rc genhtml_legend=1 00:30:19.421 --rc geninfo_all_blocks=1 00:30:19.421 --rc geninfo_unexecuted_blocks=1 00:30:19.421 00:30:19.421 ' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:19.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.421 --rc genhtml_branch_coverage=1 00:30:19.421 --rc genhtml_function_coverage=1 00:30:19.421 --rc genhtml_legend=1 00:30:19.421 --rc geninfo_all_blocks=1 00:30:19.421 --rc geninfo_unexecuted_blocks=1 00:30:19.421 00:30:19.421 ' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:19.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.421 --rc genhtml_branch_coverage=1 00:30:19.421 --rc genhtml_function_coverage=1 00:30:19.421 --rc genhtml_legend=1 00:30:19.421 --rc geninfo_all_blocks=1 00:30:19.421 --rc geninfo_unexecuted_blocks=1 00:30:19.421 00:30:19.421 ' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.421 07:54:11 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:19.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:19.421 07:54:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:21.328 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:21.328 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.328 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.329 07:54:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:21.329 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:21.329 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:21.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:21.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:30:21.329 00:30:21.329 --- 10.0.0.2 ping statistics --- 00:30:21.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.329 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:21.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:21.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:30:21.329 00:30:21.329 --- 10.0.0.1 ping statistics --- 00:30:21.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.329 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3064694 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3064694 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3064694 ']' 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.329 07:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:21.589 [2024-11-19 07:54:13.340654] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:21.589 [2024-11-19 07:54:13.340827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.589 [2024-11-19 07:54:13.486722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:21.849 [2024-11-19 07:54:13.625915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:21.849 [2024-11-19 07:54:13.626009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.849 [2024-11-19 07:54:13.626036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.850 [2024-11-19 07:54:13.626060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.850 [2024-11-19 07:54:13.626080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.850 [2024-11-19 07:54:13.628927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.850 [2024-11-19 07:54:13.628997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.850 [2024-11-19 07:54:13.629081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.850 [2024-11-19 07:54:13.629087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.417 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.417 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:22.417 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.417 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.417 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:22.676 [2024-11-19 07:54:14.364602] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:22.676 Malloc0 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:22.676 [2024-11-19 07:54:14.491545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:22.676 [ 00:30:22.676 { 00:30:22.676 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:22.676 "subtype": "Discovery", 00:30:22.676 "listen_addresses": [], 00:30:22.676 "allow_any_host": true, 00:30:22.676 "hosts": [] 00:30:22.676 }, 00:30:22.676 { 00:30:22.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:22.676 "subtype": "NVMe", 00:30:22.676 "listen_addresses": [ 00:30:22.676 { 00:30:22.676 "trtype": "TCP", 00:30:22.676 "adrfam": "IPv4", 00:30:22.676 "traddr": "10.0.0.2", 00:30:22.676 "trsvcid": "4420" 00:30:22.676 } 00:30:22.676 ], 00:30:22.676 "allow_any_host": true, 00:30:22.676 "hosts": [], 00:30:22.676 "serial_number": "SPDK00000000000001", 00:30:22.676 "model_number": "SPDK bdev Controller", 00:30:22.676 "max_namespaces": 2, 00:30:22.676 "min_cntlid": 1, 00:30:22.676 "max_cntlid": 65519, 00:30:22.676 "namespaces": [ 00:30:22.676 { 00:30:22.676 "nsid": 1, 00:30:22.676 "bdev_name": "Malloc0", 00:30:22.676 "name": "Malloc0", 00:30:22.676 "nguid": "95C30404F93D49CD9BC44063465C234E", 00:30:22.676 "uuid": "95c30404-f93d-49cd-9bc4-4063465c234e" 00:30:22.676 } 00:30:22.676 ] 00:30:22.676 } 00:30:22.676 ] 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3064858 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:22.676 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.935 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.193 Malloc1 00:30:23.193 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.193 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:23.193 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.193 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.193 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.193 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:23.193 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.193 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.193 [ 00:30:23.193 { 00:30:23.194 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:23.194 "subtype": "Discovery", 00:30:23.194 "listen_addresses": [], 00:30:23.194 "allow_any_host": true, 00:30:23.194 "hosts": [] 00:30:23.194 }, 00:30:23.194 { 00:30:23.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:23.194 "subtype": "NVMe", 00:30:23.194 "listen_addresses": [ 00:30:23.194 { 00:30:23.194 "trtype": "TCP", 00:30:23.194 "adrfam": "IPv4", 00:30:23.194 "traddr": "10.0.0.2", 00:30:23.194 "trsvcid": "4420" 00:30:23.194 } 00:30:23.194 ], 00:30:23.194 "allow_any_host": true, 00:30:23.194 "hosts": [], 00:30:23.194 "serial_number": "SPDK00000000000001", 00:30:23.194 "model_number": 
"SPDK bdev Controller", 00:30:23.194 "max_namespaces": 2, 00:30:23.194 "min_cntlid": 1, 00:30:23.194 "max_cntlid": 65519, 00:30:23.194 "namespaces": [ 00:30:23.194 { 00:30:23.194 "nsid": 1, 00:30:23.194 "bdev_name": "Malloc0", 00:30:23.194 "name": "Malloc0", 00:30:23.194 "nguid": "95C30404F93D49CD9BC44063465C234E", 00:30:23.194 "uuid": "95c30404-f93d-49cd-9bc4-4063465c234e" 00:30:23.194 }, 00:30:23.194 { 00:30:23.194 "nsid": 2, 00:30:23.194 "bdev_name": "Malloc1", 00:30:23.194 "name": "Malloc1", 00:30:23.194 "nguid": "6F18C309CAC74CF3A8CC25D072F704D5", 00:30:23.194 "uuid": "6f18c309-cac7-4cf3-a8cc-25d072f704d5" 00:30:23.194 } 00:30:23.194 ] 00:30:23.194 } 00:30:23.194 ] 00:30:23.194 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.194 07:54:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3064858 00:30:23.194 Asynchronous Event Request test 00:30:23.194 Attaching to 10.0.0.2 00:30:23.194 Attached to 10.0.0.2 00:30:23.194 Registering asynchronous event callbacks... 00:30:23.194 Starting namespace attribute notice tests for all controllers... 00:30:23.194 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:23.194 aer_cb - Changed Namespace 00:30:23.194 Cleaning up... 
00:30:23.194 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:23.194 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.194 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.452 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.713 rmmod nvme_tcp 
00:30:23.713 rmmod nvme_fabrics 00:30:23.713 rmmod nvme_keyring 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3064694 ']' 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3064694 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3064694 ']' 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3064694 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3064694 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3064694' 00:30:23.713 killing process with pid 3064694 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3064694 00:30:23.713 07:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3064694 00:30:24.650 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.650 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.650 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.911 07:54:16 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:24.911 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:24.911 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.911 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.911 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.911 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.911 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.911 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.911 07:54:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.819 00:30:26.819 real 0m7.614s 00:30:26.819 user 0m11.623s 00:30:26.819 sys 0m2.181s 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:26.819 ************************************ 00:30:26.819 END TEST nvmf_aer 00:30:26.819 ************************************ 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.819 ************************************ 00:30:26.819 START TEST nvmf_async_init 
00:30:26.819 ************************************ 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:26.819 * Looking for test storage... 00:30:26.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:30:26.819 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.078 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:27.079 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:27.079 --rc genhtml_branch_coverage=1 00:30:27.079 --rc genhtml_function_coverage=1 00:30:27.079 --rc genhtml_legend=1 00:30:27.079 --rc geninfo_all_blocks=1 00:30:27.079 --rc geninfo_unexecuted_blocks=1 00:30:27.079 00:30:27.079 ' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.079 --rc genhtml_branch_coverage=1 00:30:27.079 --rc genhtml_function_coverage=1 00:30:27.079 --rc genhtml_legend=1 00:30:27.079 --rc geninfo_all_blocks=1 00:30:27.079 --rc geninfo_unexecuted_blocks=1 00:30:27.079 00:30:27.079 ' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.079 --rc genhtml_branch_coverage=1 00:30:27.079 --rc genhtml_function_coverage=1 00:30:27.079 --rc genhtml_legend=1 00:30:27.079 --rc geninfo_all_blocks=1 00:30:27.079 --rc geninfo_unexecuted_blocks=1 00:30:27.079 00:30:27.079 ' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:27.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.079 --rc genhtml_branch_coverage=1 00:30:27.079 --rc genhtml_function_coverage=1 00:30:27.079 --rc genhtml_legend=1 00:30:27.079 --rc geninfo_all_blocks=1 00:30:27.079 --rc geninfo_unexecuted_blocks=1 00:30:27.079 00:30:27.079 ' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.079 07:54:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.079 
07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:27.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f1f3682be92247aab63b90d91b6065a0 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.079 07:54:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.985 07:54:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:28.985 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.985 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:28.986 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:28.986 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:28.986 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:28.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:30:28.986 00:30:28.986 --- 10.0.0.2 ping statistics --- 00:30:28.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.986 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:30:28.986 00:30:28.986 --- 10.0.0.1 ping statistics --- 00:30:28.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.986 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.986 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3067014 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3067014 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3067014 ']' 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.244 07:54:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:29.244 [2024-11-19 07:54:21.030551] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:29.244 [2024-11-19 07:54:21.030727] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.244 [2024-11-19 07:54:21.176079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.504 [2024-11-19 07:54:21.298362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.504 [2024-11-19 07:54:21.298447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.504 [2024-11-19 07:54:21.298480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.504 [2024-11-19 07:54:21.298511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.504 [2024-11-19 07:54:21.298536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:29.504 [2024-11-19 07:54:21.300102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 [2024-11-19 07:54:22.070309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 null0 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f1f3682be92247aab63b90d91b6065a0 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 [2024-11-19 07:54:22.110639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 nvme0n1 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.443 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 [ 00:30:30.443 { 00:30:30.443 "name": "nvme0n1", 00:30:30.443 "aliases": [ 00:30:30.443 "f1f3682b-e922-47aa-b63b-90d91b6065a0" 00:30:30.443 ], 00:30:30.443 "product_name": "NVMe disk", 00:30:30.443 "block_size": 512, 00:30:30.443 "num_blocks": 2097152, 00:30:30.443 "uuid": "f1f3682b-e922-47aa-b63b-90d91b6065a0", 00:30:30.443 "numa_id": 0, 00:30:30.443 "assigned_rate_limits": { 00:30:30.443 "rw_ios_per_sec": 0, 00:30:30.443 "rw_mbytes_per_sec": 0, 00:30:30.443 "r_mbytes_per_sec": 0, 00:30:30.443 "w_mbytes_per_sec": 0 00:30:30.443 }, 00:30:30.443 "claimed": false, 00:30:30.443 "zoned": false, 00:30:30.443 "supported_io_types": { 00:30:30.443 "read": true, 00:30:30.443 "write": true, 00:30:30.443 "unmap": false, 00:30:30.443 "flush": true, 00:30:30.443 "reset": true, 00:30:30.443 "nvme_admin": true, 00:30:30.443 "nvme_io": true, 00:30:30.443 "nvme_io_md": false, 00:30:30.443 "write_zeroes": true, 00:30:30.443 "zcopy": false, 00:30:30.443 "get_zone_info": false, 00:30:30.443 "zone_management": false, 00:30:30.443 "zone_append": false, 00:30:30.443 "compare": true, 00:30:30.443 "compare_and_write": true, 00:30:30.443 "abort": true, 00:30:30.443 "seek_hole": false, 00:30:30.443 "seek_data": false, 00:30:30.443 "copy": true, 00:30:30.443 
"nvme_iov_md": false 00:30:30.443 }, 00:30:30.443 "memory_domains": [ 00:30:30.443 { 00:30:30.443 "dma_device_id": "system", 00:30:30.443 "dma_device_type": 1 00:30:30.443 } 00:30:30.443 ], 00:30:30.443 "driver_specific": { 00:30:30.443 "nvme": [ 00:30:30.443 { 00:30:30.443 "trid": { 00:30:30.443 "trtype": "TCP", 00:30:30.443 "adrfam": "IPv4", 00:30:30.443 "traddr": "10.0.0.2", 00:30:30.443 "trsvcid": "4420", 00:30:30.443 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:30.444 }, 00:30:30.444 "ctrlr_data": { 00:30:30.444 "cntlid": 1, 00:30:30.444 "vendor_id": "0x8086", 00:30:30.444 "model_number": "SPDK bdev Controller", 00:30:30.444 "serial_number": "00000000000000000000", 00:30:30.444 "firmware_revision": "25.01", 00:30:30.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:30.444 "oacs": { 00:30:30.444 "security": 0, 00:30:30.444 "format": 0, 00:30:30.444 "firmware": 0, 00:30:30.444 "ns_manage": 0 00:30:30.444 }, 00:30:30.444 "multi_ctrlr": true, 00:30:30.444 "ana_reporting": false 00:30:30.444 }, 00:30:30.444 "vs": { 00:30:30.444 "nvme_version": "1.3" 00:30:30.444 }, 00:30:30.444 "ns_data": { 00:30:30.444 "id": 1, 00:30:30.444 "can_share": true 00:30:30.444 } 00:30:30.444 } 00:30:30.444 ], 00:30:30.444 "mp_policy": "active_passive" 00:30:30.444 } 00:30:30.444 } 00:30:30.444 ] 00:30:30.444 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.444 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:30.444 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.444 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.444 [2024-11-19 07:54:22.367390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:30.444 [2024-11-19 07:54:22.367543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:30:30.704 [2024-11-19 07:54:22.499931] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:30.704 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.704 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:30.704 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.704 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.704 [ 00:30:30.704 { 00:30:30.704 "name": "nvme0n1", 00:30:30.704 "aliases": [ 00:30:30.704 "f1f3682b-e922-47aa-b63b-90d91b6065a0" 00:30:30.704 ], 00:30:30.704 "product_name": "NVMe disk", 00:30:30.704 "block_size": 512, 00:30:30.704 "num_blocks": 2097152, 00:30:30.704 "uuid": "f1f3682b-e922-47aa-b63b-90d91b6065a0", 00:30:30.704 "numa_id": 0, 00:30:30.704 "assigned_rate_limits": { 00:30:30.704 "rw_ios_per_sec": 0, 00:30:30.704 "rw_mbytes_per_sec": 0, 00:30:30.704 "r_mbytes_per_sec": 0, 00:30:30.704 "w_mbytes_per_sec": 0 00:30:30.704 }, 00:30:30.704 "claimed": false, 00:30:30.704 "zoned": false, 00:30:30.704 "supported_io_types": { 00:30:30.704 "read": true, 00:30:30.704 "write": true, 00:30:30.704 "unmap": false, 00:30:30.704 "flush": true, 00:30:30.704 "reset": true, 00:30:30.704 "nvme_admin": true, 00:30:30.704 "nvme_io": true, 00:30:30.704 "nvme_io_md": false, 00:30:30.704 "write_zeroes": true, 00:30:30.704 "zcopy": false, 00:30:30.704 "get_zone_info": false, 00:30:30.704 "zone_management": false, 00:30:30.704 "zone_append": false, 00:30:30.704 "compare": true, 00:30:30.704 "compare_and_write": true, 00:30:30.704 "abort": true, 00:30:30.704 "seek_hole": false, 00:30:30.704 "seek_data": false, 00:30:30.704 "copy": true, 00:30:30.705 "nvme_iov_md": false 00:30:30.705 }, 00:30:30.705 "memory_domains": [ 
00:30:30.705 { 00:30:30.705 "dma_device_id": "system", 00:30:30.705 "dma_device_type": 1 00:30:30.705 } 00:30:30.705 ], 00:30:30.705 "driver_specific": { 00:30:30.705 "nvme": [ 00:30:30.705 { 00:30:30.705 "trid": { 00:30:30.705 "trtype": "TCP", 00:30:30.705 "adrfam": "IPv4", 00:30:30.705 "traddr": "10.0.0.2", 00:30:30.705 "trsvcid": "4420", 00:30:30.705 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:30.705 }, 00:30:30.705 "ctrlr_data": { 00:30:30.705 "cntlid": 2, 00:30:30.705 "vendor_id": "0x8086", 00:30:30.705 "model_number": "SPDK bdev Controller", 00:30:30.705 "serial_number": "00000000000000000000", 00:30:30.705 "firmware_revision": "25.01", 00:30:30.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:30.705 "oacs": { 00:30:30.705 "security": 0, 00:30:30.705 "format": 0, 00:30:30.705 "firmware": 0, 00:30:30.705 "ns_manage": 0 00:30:30.705 }, 00:30:30.705 "multi_ctrlr": true, 00:30:30.705 "ana_reporting": false 00:30:30.705 }, 00:30:30.705 "vs": { 00:30:30.705 "nvme_version": "1.3" 00:30:30.705 }, 00:30:30.705 "ns_data": { 00:30:30.705 "id": 1, 00:30:30.705 "can_share": true 00:30:30.705 } 00:30:30.705 } 00:30:30.705 ], 00:30:30.705 "mp_policy": "active_passive" 00:30:30.705 } 00:30:30.705 } 00:30:30.705 ] 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.sbkftd8PYO 
00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.sbkftd8PYO 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.sbkftd8PYO 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.705 [2024-11-19 07:54:22.560218] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:30.705 [2024-11-19 07:54:22.560460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.705 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.705 [2024-11-19 07:54:22.576269] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:30.964 nvme0n1 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.964 [ 00:30:30.964 { 00:30:30.964 "name": "nvme0n1", 00:30:30.964 "aliases": [ 00:30:30.964 "f1f3682b-e922-47aa-b63b-90d91b6065a0" 00:30:30.964 ], 00:30:30.964 "product_name": "NVMe disk", 00:30:30.964 "block_size": 512, 00:30:30.964 "num_blocks": 2097152, 00:30:30.964 "uuid": "f1f3682b-e922-47aa-b63b-90d91b6065a0", 00:30:30.964 "numa_id": 0, 00:30:30.964 "assigned_rate_limits": { 00:30:30.964 "rw_ios_per_sec": 0, 00:30:30.964 
"rw_mbytes_per_sec": 0, 00:30:30.964 "r_mbytes_per_sec": 0, 00:30:30.964 "w_mbytes_per_sec": 0 00:30:30.964 }, 00:30:30.964 "claimed": false, 00:30:30.964 "zoned": false, 00:30:30.964 "supported_io_types": { 00:30:30.964 "read": true, 00:30:30.964 "write": true, 00:30:30.964 "unmap": false, 00:30:30.964 "flush": true, 00:30:30.964 "reset": true, 00:30:30.964 "nvme_admin": true, 00:30:30.964 "nvme_io": true, 00:30:30.964 "nvme_io_md": false, 00:30:30.964 "write_zeroes": true, 00:30:30.964 "zcopy": false, 00:30:30.964 "get_zone_info": false, 00:30:30.964 "zone_management": false, 00:30:30.964 "zone_append": false, 00:30:30.964 "compare": true, 00:30:30.964 "compare_and_write": true, 00:30:30.964 "abort": true, 00:30:30.964 "seek_hole": false, 00:30:30.964 "seek_data": false, 00:30:30.964 "copy": true, 00:30:30.964 "nvme_iov_md": false 00:30:30.964 }, 00:30:30.964 "memory_domains": [ 00:30:30.964 { 00:30:30.964 "dma_device_id": "system", 00:30:30.964 "dma_device_type": 1 00:30:30.964 } 00:30:30.964 ], 00:30:30.964 "driver_specific": { 00:30:30.964 "nvme": [ 00:30:30.964 { 00:30:30.964 "trid": { 00:30:30.964 "trtype": "TCP", 00:30:30.964 "adrfam": "IPv4", 00:30:30.964 "traddr": "10.0.0.2", 00:30:30.964 "trsvcid": "4421", 00:30:30.964 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:30.964 }, 00:30:30.964 "ctrlr_data": { 00:30:30.964 "cntlid": 3, 00:30:30.964 "vendor_id": "0x8086", 00:30:30.964 "model_number": "SPDK bdev Controller", 00:30:30.964 "serial_number": "00000000000000000000", 00:30:30.964 "firmware_revision": "25.01", 00:30:30.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:30.964 "oacs": { 00:30:30.964 "security": 0, 00:30:30.964 "format": 0, 00:30:30.964 "firmware": 0, 00:30:30.964 "ns_manage": 0 00:30:30.964 }, 00:30:30.964 "multi_ctrlr": true, 00:30:30.964 "ana_reporting": false 00:30:30.964 }, 00:30:30.964 "vs": { 00:30:30.964 "nvme_version": "1.3" 00:30:30.964 }, 00:30:30.964 "ns_data": { 00:30:30.964 "id": 1, 00:30:30.964 "can_share": true 00:30:30.964 } 
00:30:30.964 } 00:30:30.964 ], 00:30:30.964 "mp_policy": "active_passive" 00:30:30.964 } 00:30:30.964 } 00:30:30.964 ] 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.sbkftd8PYO 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:30.964 rmmod nvme_tcp 00:30:30.964 rmmod nvme_fabrics 00:30:30.964 rmmod nvme_keyring 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:30.964 07:54:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3067014 ']' 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3067014 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3067014 ']' 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3067014 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3067014 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3067014' 00:30:30.964 killing process with pid 3067014 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3067014 00:30:30.964 07:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3067014 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:32.338 
07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.338 07:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.241 07:54:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:34.241 00:30:34.241 real 0m7.293s 00:30:34.241 user 0m3.997s 00:30:34.241 sys 0m1.960s 00:30:34.241 07:54:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.242 07:54:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.242 ************************************ 00:30:34.242 END TEST nvmf_async_init 00:30:34.242 ************************************ 00:30:34.242 07:54:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:34.242 07:54:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:34.242 07:54:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:34.242 07:54:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.242 ************************************ 00:30:34.242 START TEST dma 00:30:34.242 ************************************ 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:30:34.242 * Looking for test storage... 00:30:34.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:34.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.242 --rc genhtml_branch_coverage=1 00:30:34.242 --rc genhtml_function_coverage=1 00:30:34.242 --rc genhtml_legend=1 00:30:34.242 --rc geninfo_all_blocks=1 00:30:34.242 --rc geninfo_unexecuted_blocks=1 00:30:34.242 00:30:34.242 ' 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:34.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.242 --rc genhtml_branch_coverage=1 00:30:34.242 --rc genhtml_function_coverage=1 
00:30:34.242 --rc genhtml_legend=1 00:30:34.242 --rc geninfo_all_blocks=1 00:30:34.242 --rc geninfo_unexecuted_blocks=1 00:30:34.242 00:30:34.242 ' 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:34.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.242 --rc genhtml_branch_coverage=1 00:30:34.242 --rc genhtml_function_coverage=1 00:30:34.242 --rc genhtml_legend=1 00:30:34.242 --rc geninfo_all_blocks=1 00:30:34.242 --rc geninfo_unexecuted_blocks=1 00:30:34.242 00:30:34.242 ' 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:34.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.242 --rc genhtml_branch_coverage=1 00:30:34.242 --rc genhtml_function_coverage=1 00:30:34.242 --rc genhtml_legend=1 00:30:34.242 --rc geninfo_all_blocks=1 00:30:34.242 --rc geninfo_unexecuted_blocks=1 00:30:34.242 00:30:34.242 ' 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.242 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:34.502 
07:54:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:34.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:34.502 00:30:34.502 real 0m0.159s 00:30:34.502 user 0m0.109s 00:30:34.502 sys 0m0.059s 00:30:34.502 07:54:26 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.502 07:54:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:34.502 ************************************ 00:30:34.502 END TEST dma 00:30:34.502 ************************************ 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.503 ************************************ 00:30:34.503 START TEST nvmf_identify 00:30:34.503 ************************************ 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:34.503 * Looking for test storage... 
00:30:34.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:34.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.503 --rc genhtml_branch_coverage=1 00:30:34.503 --rc genhtml_function_coverage=1 00:30:34.503 --rc genhtml_legend=1 00:30:34.503 --rc geninfo_all_blocks=1 00:30:34.503 --rc geninfo_unexecuted_blocks=1 00:30:34.503 00:30:34.503 ' 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:30:34.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.503 --rc genhtml_branch_coverage=1 00:30:34.503 --rc genhtml_function_coverage=1 00:30:34.503 --rc genhtml_legend=1 00:30:34.503 --rc geninfo_all_blocks=1 00:30:34.503 --rc geninfo_unexecuted_blocks=1 00:30:34.503 00:30:34.503 ' 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:34.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.503 --rc genhtml_branch_coverage=1 00:30:34.503 --rc genhtml_function_coverage=1 00:30:34.503 --rc genhtml_legend=1 00:30:34.503 --rc geninfo_all_blocks=1 00:30:34.503 --rc geninfo_unexecuted_blocks=1 00:30:34.503 00:30:34.503 ' 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:34.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.503 --rc genhtml_branch_coverage=1 00:30:34.503 --rc genhtml_function_coverage=1 00:30:34.503 --rc genhtml_legend=1 00:30:34.503 --rc geninfo_all_blocks=1 00:30:34.503 --rc geninfo_unexecuted_blocks=1 00:30:34.503 00:30:34.503 ' 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.503 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:34.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:34.504 07:54:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:36.406 07:54:28 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:36.406 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:36.407 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.407 
07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:36.407 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:36.407 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:36.407 07:54:28 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:36.407 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.407 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:36.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:30:36.667 00:30:36.667 --- 10.0.0.2 ping statistics --- 00:30:36.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.667 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:36.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:30:36.667 00:30:36.667 --- 10.0.0.1 ping statistics --- 00:30:36.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.667 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3069330 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3069330 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3069330 ']' 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
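
For readers following the `nvmftestinit` / `nvmf_tcp_init` trace above, the network bring-up reduces to the sequence below. This is a sketch reconstructed from the commands visible in the log, not a supported script: the `cvl_0_*` interface names come from this run's Intel E810 (`ice`) NICs and will differ on other machines, and every command requires root plus two physical ports wired back-to-back.

```shell
# Target port moves into its own namespace; initiator port stays in the root ns.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # gets 10.0.0.2 (NVMF_FIRST_TARGET_IP)
INI_IF=cvl_0_1        # gets 10.0.0.1 (NVMF_FIRST_INITIATOR_IP)

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator side, then verify both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting only the target port in a namespace is what lets a single host exercise a real TCP path over physical NICs: traffic from 10.0.0.1 to 10.0.0.2 must leave one port and arrive on the other rather than short-circuiting through `lo`.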
00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.667 07:54:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:36.667 [2024-11-19 07:54:28.584290] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:36.667 [2024-11-19 07:54:28.584449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.926 [2024-11-19 07:54:28.734231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:37.184 [2024-11-19 07:54:28.864423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.184 [2024-11-19 07:54:28.864496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.184 [2024-11-19 07:54:28.864517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.184 [2024-11-19 07:54:28.864538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.184 [2024-11-19 07:54:28.864554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:37.184 [2024-11-19 07:54:28.867219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.184 [2024-11-19 07:54:28.867283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.184 [2024-11-19 07:54:28.867329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.184 [2024-11-19 07:54:28.867336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:37.749 [2024-11-19 07:54:29.597047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.749 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:38.008 Malloc0 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.008 07:54:29 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:38.008 [2024-11-19 07:54:29.740445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:38.008 07:54:29 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.008 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:38.008 [ 00:30:38.008 { 00:30:38.008 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:38.008 "subtype": "Discovery", 00:30:38.008 "listen_addresses": [ 00:30:38.008 { 00:30:38.008 "trtype": "TCP", 00:30:38.008 "adrfam": "IPv4", 00:30:38.008 "traddr": "10.0.0.2", 00:30:38.008 "trsvcid": "4420" 00:30:38.008 } 00:30:38.008 ], 00:30:38.008 "allow_any_host": true, 00:30:38.008 "hosts": [] 00:30:38.008 }, 00:30:38.008 { 00:30:38.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.008 "subtype": "NVMe", 00:30:38.008 "listen_addresses": [ 00:30:38.008 { 00:30:38.008 "trtype": "TCP", 00:30:38.009 "adrfam": "IPv4", 00:30:38.009 "traddr": "10.0.0.2", 00:30:38.009 "trsvcid": "4420" 00:30:38.009 } 00:30:38.009 ], 00:30:38.009 "allow_any_host": true, 00:30:38.009 "hosts": [], 00:30:38.009 "serial_number": "SPDK00000000000001", 00:30:38.009 "model_number": "SPDK bdev Controller", 00:30:38.009 "max_namespaces": 32, 00:30:38.009 "min_cntlid": 1, 00:30:38.009 "max_cntlid": 65519, 00:30:38.009 "namespaces": [ 00:30:38.009 { 00:30:38.009 "nsid": 1, 00:30:38.009 "bdev_name": "Malloc0", 00:30:38.009 "name": "Malloc0", 00:30:38.009 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:38.009 "eui64": "ABCDEF0123456789", 00:30:38.009 "uuid": "52b05d90-4930-4f19-bf86-bf71d9efaf97" 00:30:38.009 } 00:30:38.009 ] 00:30:38.009 } 00:30:38.009 ] 00:30:38.009 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.009 07:54:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:38.009 [2024-11-19 07:54:29.813542] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:38.009 [2024-11-19 07:54:29.813668] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069487 ] 00:30:38.009 [2024-11-19 07:54:29.894316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:38.009 [2024-11-19 07:54:29.894451] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:38.009 [2024-11-19 07:54:29.894473] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:38.009 [2024-11-19 07:54:29.894511] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:38.009 [2024-11-19 07:54:29.894536] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:38.009 [2024-11-19 07:54:29.895407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:38.009 [2024-11-19 07:54:29.895496] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:38.009 [2024-11-19 07:54:29.909717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:38.009 [2024-11-19 07:54:29.909756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:38.009 [2024-11-19 07:54:29.909773] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:38.009 [2024-11-19 07:54:29.909785] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:38.009 [2024-11-19 07:54:29.909871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.909892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.909906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.009 [2024-11-19 07:54:29.909945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:38.009 [2024-11-19 07:54:29.909987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.009 [2024-11-19 07:54:29.916711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.009 [2024-11-19 07:54:29.916743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.009 [2024-11-19 07:54:29.916758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.916772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.009 [2024-11-19 07:54:29.916819] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:38.009 [2024-11-19 07:54:29.916845] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:38.009 [2024-11-19 07:54:29.916863] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:38.009 [2024-11-19 07:54:29.916890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.916906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.916923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x615000015700) 00:30:38.009 [2024-11-19 07:54:29.916983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.009 [2024-11-19 07:54:29.917020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.009 [2024-11-19 07:54:29.917174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.009 [2024-11-19 07:54:29.917197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.009 [2024-11-19 07:54:29.917210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.917222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.009 [2024-11-19 07:54:29.917252] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:38.009 [2024-11-19 07:54:29.917277] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:38.009 [2024-11-19 07:54:29.917305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.917319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.917331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.009 [2024-11-19 07:54:29.917356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.009 [2024-11-19 07:54:29.917391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.009 [2024-11-19 07:54:29.917542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.009 [2024-11-19 07:54:29.917565] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.009 [2024-11-19 07:54:29.917577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.917594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.009 [2024-11-19 07:54:29.917611] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:38.009 [2024-11-19 07:54:29.917636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:38.009 [2024-11-19 07:54:29.917657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.917671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.917683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.009 [2024-11-19 07:54:29.917719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.009 [2024-11-19 07:54:29.917759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.009 [2024-11-19 07:54:29.917868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.009 [2024-11-19 07:54:29.917890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.009 [2024-11-19 07:54:29.917902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.917913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.009 [2024-11-19 07:54:29.917929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for 
CSTS.RDY = 0 (timeout 15000 ms) 00:30:38.009 [2024-11-19 07:54:29.917962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.917981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.917993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.009 [2024-11-19 07:54:29.918013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.009 [2024-11-19 07:54:29.918045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.009 [2024-11-19 07:54:29.918176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.009 [2024-11-19 07:54:29.918197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.009 [2024-11-19 07:54:29.918224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.918236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.009 [2024-11-19 07:54:29.918251] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:38.009 [2024-11-19 07:54:29.918271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:38.009 [2024-11-19 07:54:29.918300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:38.009 [2024-11-19 07:54:29.918418] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:38.009 [2024-11-19 07:54:29.918434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:38.009 [2024-11-19 07:54:29.918457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.918471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.918488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.009 [2024-11-19 07:54:29.918524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.009 [2024-11-19 07:54:29.918558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.009 [2024-11-19 07:54:29.918702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.009 [2024-11-19 07:54:29.918730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.009 [2024-11-19 07:54:29.918743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.918755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.009 [2024-11-19 07:54:29.918771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:38.009 [2024-11-19 07:54:29.918803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.918820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.009 [2024-11-19 07:54:29.918833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.010 [2024-11-19 07:54:29.918853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.010 [2024-11-19 
07:54:29.918885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.010 [2024-11-19 07:54:29.919016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.010 [2024-11-19 07:54:29.919037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.010 [2024-11-19 07:54:29.919048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.010 [2024-11-19 07:54:29.919076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.010 [2024-11-19 07:54:29.919091] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:38.010 [2024-11-19 07:54:29.919105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:38.010 [2024-11-19 07:54:29.919128] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:38.010 [2024-11-19 07:54:29.919155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:38.010 [2024-11-19 07:54:29.919189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.010 [2024-11-19 07:54:29.919206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.010 [2024-11-19 07:54:29.919232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.010 [2024-11-19 07:54:29.919270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.010 [2024-11-19 07:54:29.919479] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.010 [2024-11-19 07:54:29.919501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.010 [2024-11-19 07:54:29.919513] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.010 [2024-11-19 07:54:29.919525] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:38.010 [2024-11-19 07:54:29.919539] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:38.010 [2024-11-19 07:54:29.919553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.010 [2024-11-19 07:54:29.919591] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.010 [2024-11-19 07:54:29.919609] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.961721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.270 [2024-11-19 07:54:29.961751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.270 [2024-11-19 07:54:29.961763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.961779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.270 [2024-11-19 07:54:29.961807] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:38.270 [2024-11-19 07:54:29.961824] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:38.270 [2024-11-19 07:54:29.961836] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:38.270 [2024-11-19 07:54:29.961856] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:38.270 [2024-11-19 07:54:29.961870] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:38.270 [2024-11-19 07:54:29.961888] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:38.270 [2024-11-19 07:54:29.961932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:38.270 [2024-11-19 07:54:29.961955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.961969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.961996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.270 [2024-11-19 07:54:29.962024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:38.270 [2024-11-19 07:54:29.962061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.270 [2024-11-19 07:54:29.962207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.270 [2024-11-19 07:54:29.962230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.270 [2024-11-19 07:54:29.962242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.962253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.270 [2024-11-19 07:54:29.962284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.962303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.270 [2024-11-19 
07:54:29.962316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.270 [2024-11-19 07:54:29.962336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.270 [2024-11-19 07:54:29.962354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.962371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.962383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:38.270 [2024-11-19 07:54:29.962400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.270 [2024-11-19 07:54:29.962417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.962429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.962439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:38.270 [2024-11-19 07:54:29.962456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.270 [2024-11-19 07:54:29.962493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.962507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.962517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.270 [2024-11-19 07:54:29.962547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.270 [2024-11-19 07:54:29.962563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:38.270 [2024-11-19 07:54:29.962590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:38.270 [2024-11-19 07:54:29.962626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.270 [2024-11-19 07:54:29.962639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.270 [2024-11-19 07:54:29.962659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.270 [2024-11-19 07:54:29.962719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.270 [2024-11-19 07:54:29.962739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:38.270 [2024-11-19 07:54:29.962752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:38.270 [2024-11-19 07:54:29.962765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.271 [2024-11-19 07:54:29.962783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.271 [2024-11-19 07:54:29.962935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.271 [2024-11-19 07:54:29.962958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.271 [2024-11-19 07:54:29.962970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.962981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.271 [2024-11-19 07:54:29.962997] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:38.271 [2024-11-19 07:54:29.963022] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:38.271 [2024-11-19 07:54:29.963057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.963074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.271 [2024-11-19 07:54:29.963094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.271 [2024-11-19 07:54:29.963127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.271 [2024-11-19 07:54:29.963296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.271 [2024-11-19 07:54:29.963323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.271 [2024-11-19 07:54:29.963342] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.963355] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:38.271 [2024-11-19 07:54:29.963374] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:38.271 [2024-11-19 07:54:29.963387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.963420] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.963437] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.963457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:30:38.271 [2024-11-19 07:54:29.963475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.271 [2024-11-19 07:54:29.963486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.963498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.271 [2024-11-19 07:54:29.963534] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:38.271 [2024-11-19 07:54:29.963601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.963620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.271 [2024-11-19 07:54:29.963656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.271 [2024-11-19 07:54:29.963683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.963725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.963738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:38.271 [2024-11-19 07:54:29.963757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.271 [2024-11-19 07:54:29.963806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.271 [2024-11-19 07:54:29.963827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:38.271 [2024-11-19 07:54:29.964131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.271 [2024-11-19 07:54:29.964155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:30:38.271 [2024-11-19 07:54:29.964168] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.964180] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:38.271 [2024-11-19 07:54:29.964194] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:38.271 [2024-11-19 07:54:29.964207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.964236] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.964250] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.964266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.271 [2024-11-19 07:54:29.964282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.271 [2024-11-19 07:54:29.964293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:29.964306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:38.271 [2024-11-19 07:54:30.004818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.271 [2024-11-19 07:54:30.004851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.271 [2024-11-19 07:54:30.004865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.004883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.271 [2024-11-19 07:54:30.004937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.004958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.271 [2024-11-19 
07:54:30.004981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.271 [2024-11-19 07:54:30.005028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.271 [2024-11-19 07:54:30.005275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.271 [2024-11-19 07:54:30.005299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.271 [2024-11-19 07:54:30.005311] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.005322] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:38.271 [2024-11-19 07:54:30.005335] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:38.271 [2024-11-19 07:54:30.005347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.005365] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.005378] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.005398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.271 [2024-11-19 07:54:30.005415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.271 [2024-11-19 07:54:30.005440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.005452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.271 [2024-11-19 07:54:30.005482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.005499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x615000015700) 00:30:38.271 [2024-11-19 07:54:30.005527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.271 [2024-11-19 07:54:30.005569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.271 [2024-11-19 07:54:30.009715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.271 [2024-11-19 07:54:30.009741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.271 [2024-11-19 07:54:30.009754] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.009765] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:38.271 [2024-11-19 07:54:30.009778] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:38.271 [2024-11-19 07:54:30.009790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.009812] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.009827] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.048734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.271 [2024-11-19 07:54:30.048807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.271 [2024-11-19 07:54:30.048822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.271 [2024-11-19 07:54:30.048838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.271 ===================================================== 00:30:38.271 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:38.271 
=====================================================
00:30:38.271 Controller Capabilities/Features
00:30:38.271 ================================
00:30:38.271 Vendor ID: 0000
00:30:38.271 Subsystem Vendor ID: 0000
00:30:38.271 Serial Number: ....................
00:30:38.271 Model Number: ........................................
00:30:38.271 Firmware Version: 25.01
00:30:38.271 Recommended Arb Burst: 0
00:30:38.271 IEEE OUI Identifier: 00 00 00
00:30:38.271 Multi-path I/O
00:30:38.271 May have multiple subsystem ports: No
00:30:38.271 May have multiple controllers: No
00:30:38.271 Associated with SR-IOV VF: No
00:30:38.271 Max Data Transfer Size: 131072
00:30:38.271 Max Number of Namespaces: 0
00:30:38.271 Max Number of I/O Queues: 1024
00:30:38.271 NVMe Specification Version (VS): 1.3
00:30:38.271 NVMe Specification Version (Identify): 1.3
00:30:38.271 Maximum Queue Entries: 128
00:30:38.271 Contiguous Queues Required: Yes
00:30:38.271 Arbitration Mechanisms Supported
00:30:38.271 Weighted Round Robin: Not Supported
00:30:38.271 Vendor Specific: Not Supported
00:30:38.271 Reset Timeout: 15000 ms
00:30:38.271 Doorbell Stride: 4 bytes
00:30:38.271 NVM Subsystem Reset: Not Supported
00:30:38.271 Command Sets Supported
00:30:38.271 NVM Command Set: Supported
00:30:38.271 Boot Partition: Not Supported
00:30:38.271 Memory Page Size Minimum: 4096 bytes
00:30:38.271 Memory Page Size Maximum: 4096 bytes
00:30:38.271 Persistent Memory Region: Not Supported
00:30:38.271 Optional Asynchronous Events Supported
00:30:38.272 Namespace Attribute Notices: Not Supported
00:30:38.272 Firmware Activation Notices: Not Supported
00:30:38.272 ANA Change Notices: Not Supported
00:30:38.272 PLE Aggregate Log Change Notices: Not Supported
00:30:38.272 LBA Status Info Alert Notices: Not Supported
00:30:38.272 EGE Aggregate Log Change Notices: Not Supported
00:30:38.272 Normal NVM Subsystem Shutdown event: Not Supported
00:30:38.272 Zone Descriptor Change Notices: Not Supported
00:30:38.272 Discovery Log Change Notices: Supported
00:30:38.272 Controller Attributes
00:30:38.272 128-bit Host Identifier: Not Supported
00:30:38.272 Non-Operational Permissive Mode: Not Supported
00:30:38.272 NVM Sets: Not Supported
00:30:38.272 Read Recovery Levels: Not Supported
00:30:38.272 Endurance Groups: Not Supported
00:30:38.272 Predictable Latency Mode: Not Supported
00:30:38.272 Traffic Based Keep ALive: Not Supported
00:30:38.272 Namespace Granularity: Not Supported
00:30:38.272 SQ Associations: Not Supported
00:30:38.272 UUID List: Not Supported
00:30:38.272 Multi-Domain Subsystem: Not Supported
00:30:38.272 Fixed Capacity Management: Not Supported
00:30:38.272 Variable Capacity Management: Not Supported
00:30:38.272 Delete Endurance Group: Not Supported
00:30:38.272 Delete NVM Set: Not Supported
00:30:38.272 Extended LBA Formats Supported: Not Supported
00:30:38.272 Flexible Data Placement Supported: Not Supported
00:30:38.272 
00:30:38.272 Controller Memory Buffer Support
00:30:38.272 ================================
00:30:38.272 Supported: No
00:30:38.272 
00:30:38.272 Persistent Memory Region Support
00:30:38.272 ================================
00:30:38.272 Supported: No
00:30:38.272 
00:30:38.272 Admin Command Set Attributes
00:30:38.272 ============================
00:30:38.272 Security Send/Receive: Not Supported
00:30:38.272 Format NVM: Not Supported
00:30:38.272 Firmware Activate/Download: Not Supported
00:30:38.272 Namespace Management: Not Supported
00:30:38.272 Device Self-Test: Not Supported
00:30:38.272 Directives: Not Supported
00:30:38.272 NVMe-MI: Not Supported
00:30:38.272 Virtualization Management: Not Supported
00:30:38.272 Doorbell Buffer Config: Not Supported
00:30:38.272 Get LBA Status Capability: Not Supported
00:30:38.272 Command & Feature Lockdown Capability: Not Supported
00:30:38.272 Abort Command Limit: 1
00:30:38.272 Async Event Request Limit: 4
00:30:38.272 Number of Firmware Slots: N/A
00:30:38.272 Firmware Slot 1 Read-Only: N/A
00:30:38.272 Firmware Activation Without Reset: N/A
00:30:38.272 Multiple Update Detection Support: N/A
00:30:38.272 Firmware Update Granularity: No Information Provided
00:30:38.272 Per-Namespace SMART Log: No
00:30:38.272 Asymmetric Namespace Access Log Page: Not Supported
00:30:38.272 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:30:38.272 Command Effects Log Page: Not Supported
00:30:38.272 Get Log Page Extended Data: Supported
00:30:38.272 Telemetry Log Pages: Not Supported
00:30:38.272 Persistent Event Log Pages: Not Supported
00:30:38.272 Supported Log Pages Log Page: May Support
00:30:38.272 Commands Supported & Effects Log Page: Not Supported
00:30:38.272 Feature Identifiers & Effects Log Page: May Support
00:30:38.272 NVMe-MI Commands & Effects Log Page: May Support
00:30:38.272 Data Area 4 for Telemetry Log: Not Supported
00:30:38.272 Error Log Page Entries Supported: 128
00:30:38.272 Keep Alive: Not Supported
00:30:38.272 
00:30:38.272 NVM Command Set Attributes
00:30:38.272 ==========================
00:30:38.272 Submission Queue Entry Size
00:30:38.272 Max: 1
00:30:38.272 Min: 1
00:30:38.272 Completion Queue Entry Size
00:30:38.272 Max: 1
00:30:38.272 Min: 1
00:30:38.272 Number of Namespaces: 0
00:30:38.272 Compare Command: Not Supported
00:30:38.272 Write Uncorrectable Command: Not Supported
00:30:38.272 Dataset Management Command: Not Supported
00:30:38.272 Write Zeroes Command: Not Supported
00:30:38.272 Set Features Save Field: Not Supported
00:30:38.272 Reservations: Not Supported
00:30:38.272 Timestamp: Not Supported
00:30:38.272 Copy: Not Supported
00:30:38.272 Volatile Write Cache: Not Present
00:30:38.272 Atomic Write Unit (Normal): 1
00:30:38.272 Atomic Write Unit (PFail): 1
00:30:38.272 Atomic Compare & Write Unit: 1
00:30:38.272 Fused Compare & Write: Supported
00:30:38.272 Scatter-Gather List
00:30:38.272 SGL Command Set: Supported
00:30:38.272 SGL Keyed: Supported
00:30:38.272 SGL Bit Bucket Descriptor: Not Supported
00:30:38.272 SGL Metadata Pointer: Not Supported
00:30:38.272 Oversized SGL: Not Supported
00:30:38.272 SGL Metadata Address: Not Supported
00:30:38.272 SGL Offset: Supported
00:30:38.272 Transport SGL Data Block: Not Supported
00:30:38.272 Replay Protected Memory Block: Not Supported
00:30:38.272 
00:30:38.272 Firmware Slot Information
00:30:38.272 =========================
00:30:38.272 Active slot: 0
00:30:38.272 
00:30:38.272 
00:30:38.272 Error Log
00:30:38.272 =========
00:30:38.272 
00:30:38.272 Active Namespaces
00:30:38.272 =================
00:30:38.272 Discovery Log Page
00:30:38.272 ==================
00:30:38.272 Generation Counter: 2
00:30:38.272 Number of Records: 2
00:30:38.272 Record Format: 0
00:30:38.272 
00:30:38.272 Discovery Log Entry 0
00:30:38.272 ----------------------
00:30:38.272 Transport Type: 3 (TCP)
00:30:38.272 Address Family: 1 (IPv4)
00:30:38.272 Subsystem Type: 3 (Current Discovery Subsystem)
00:30:38.272 Entry Flags:
00:30:38.272 Duplicate Returned Information: 1
00:30:38.272 Explicit Persistent Connection Support for Discovery: 1
00:30:38.272 Transport Requirements:
00:30:38.272 Secure Channel: Not Required
00:30:38.272 Port ID: 0 (0x0000)
00:30:38.272 Controller ID: 65535 (0xffff)
00:30:38.272 Admin Max SQ Size: 128
00:30:38.272 Transport Service Identifier: 4420
00:30:38.272 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:30:38.272 Transport Address: 10.0.0.2
00:30:38.272 Discovery Log Entry 1
00:30:38.272 ----------------------
00:30:38.272 Transport Type: 3 (TCP)
00:30:38.272 Address Family: 1 (IPv4)
00:30:38.272 Subsystem Type: 2 (NVM Subsystem)
00:30:38.272 Entry Flags:
00:30:38.272 Duplicate Returned Information: 0
00:30:38.272 Explicit Persistent Connection Support for Discovery: 0
00:30:38.272 Transport Requirements:
00:30:38.272 Secure Channel: Not Required
00:30:38.272 Port ID: 0 (0x0000)
00:30:38.272 Controller ID: 65535 (0xffff)
00:30:38.272 Admin Max SQ Size: 128
00:30:38.272 Transport Service Identifier: 4420
00:30:38.272 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:38.272 Transport Address: 10.0.0.2 [2024-11-19 07:54:30.049079] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:38.272 [2024-11-19 07:54:30.049114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.272 [2024-11-19 07:54:30.049148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.272 [2024-11-19 07:54:30.049166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:38.272 [2024-11-19 07:54:30.049180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.272 [2024-11-19 07:54:30.049193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:38.272 [2024-11-19 07:54:30.049207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.272 [2024-11-19 07:54:30.049220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.272 [2024-11-19 07:54:30.049233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.272 [2024-11-19 07:54:30.049261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.272 [2024-11-19 07:54:30.049279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.272 [2024-11-19 07:54:30.049292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.272 [2024-11-19 07:54:30.049326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.272 [2024-11-19 07:54:30.049368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.272 [2024-11-19 07:54:30.049577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.272 [2024-11-19 07:54:30.049603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.272 [2024-11-19 07:54:30.049616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.272 [2024-11-19 07:54:30.049629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.272 [2024-11-19 07:54:30.049653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.272 [2024-11-19 07:54:30.049668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.049681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.049721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.049777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.049964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.049986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.049998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.050032] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:38.273 [2024-11-19 
07:54:30.050052] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:38.273 [2024-11-19 07:54:30.050080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.050130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.050164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.050308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.050336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.050348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.050390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.050438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.050469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.050582] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.050603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.050616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.050655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.050714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.050748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.050873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.050896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.050908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.050947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.050974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.050993] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.051024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.051133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.051163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.051177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.051188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.051216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.051232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.051244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.051274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.051304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.051408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.051431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.051448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.051460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.051495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 
07:54:30.051511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.051523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.051542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.051573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.051680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.051710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.051730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.051741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.051768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.051784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.051796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.051820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.051852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.051972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.051993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.052004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:38.273 [2024-11-19 07:54:30.052016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.052043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.052059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.052070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.052089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.052120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.052222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.052243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.052254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.052265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.052292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.052308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.052319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.052338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.052369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.052495] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.052517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.052533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.052576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.052606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.052622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.052633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.052652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.052683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.273 [2024-11-19 07:54:30.056726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.273 [2024-11-19 07:54:30.056747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.273 [2024-11-19 07:54:30.056758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.056769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.273 [2024-11-19 07:54:30.056813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.056830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.273 [2024-11-19 07:54:30.056841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.273 [2024-11-19 07:54:30.056860] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.273 [2024-11-19 07:54:30.056894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.274 [2024-11-19 07:54:30.057044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.274 [2024-11-19 07:54:30.057067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.274 [2024-11-19 07:54:30.057080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.274 [2024-11-19 07:54:30.057092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.274 [2024-11-19 07:54:30.057117] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:30:38.274 00:30:38.274 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:38.274 [2024-11-19 07:54:30.172401] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:38.274 [2024-11-19 07:54:30.172514] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069531 ] 00:30:38.533 [2024-11-19 07:54:30.261986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:38.533 [2024-11-19 07:54:30.262125] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:38.533 [2024-11-19 07:54:30.262163] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:38.533 [2024-11-19 07:54:30.262202] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:38.533 [2024-11-19 07:54:30.262226] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:38.533 [2024-11-19 07:54:30.263060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:38.533 [2024-11-19 07:54:30.263145] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:38.533 [2024-11-19 07:54:30.276708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:38.533 [2024-11-19 07:54:30.276763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:38.533 [2024-11-19 07:54:30.276781] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:38.533 [2024-11-19 07:54:30.276793] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:38.533 [2024-11-19 07:54:30.276865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.533 [2024-11-19 07:54:30.276886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.533 [2024-11-19 07:54:30.276906] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.534 [2024-11-19 07:54:30.276937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:38.534 [2024-11-19 07:54:30.276982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.534 [2024-11-19 07:54:30.284720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.534 [2024-11-19 07:54:30.284747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.534 [2024-11-19 07:54:30.284760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.284773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.534 [2024-11-19 07:54:30.284824] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:38.534 [2024-11-19 07:54:30.284851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:38.534 [2024-11-19 07:54:30.284868] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:38.534 [2024-11-19 07:54:30.284896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.284916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.284932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.534 [2024-11-19 07:54:30.284955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.534 [2024-11-19 07:54:30.284991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.534 
[2024-11-19 07:54:30.285143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.534 [2024-11-19 07:54:30.285166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.534 [2024-11-19 07:54:30.285178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.285191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.534 [2024-11-19 07:54:30.285212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:38.534 [2024-11-19 07:54:30.285237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:38.534 [2024-11-19 07:54:30.285263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.285279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.285291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.534 [2024-11-19 07:54:30.285315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.534 [2024-11-19 07:54:30.285354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.534 [2024-11-19 07:54:30.285497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.534 [2024-11-19 07:54:30.285528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.534 [2024-11-19 07:54:30.285550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.285564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.534 [2024-11-19 07:54:30.285580] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:38.534 [2024-11-19 07:54:30.285605] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:38.534 [2024-11-19 07:54:30.285627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.285641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.285653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.534 [2024-11-19 07:54:30.285679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.534 [2024-11-19 07:54:30.285722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.534 [2024-11-19 07:54:30.285862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.534 [2024-11-19 07:54:30.285885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.534 [2024-11-19 07:54:30.285897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.285908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.534 [2024-11-19 07:54:30.285924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:38.534 [2024-11-19 07:54:30.285953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.285975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.285988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.534 
[2024-11-19 07:54:30.286012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.534 [2024-11-19 07:54:30.286046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.534 [2024-11-19 07:54:30.286170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.534 [2024-11-19 07:54:30.286197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.534 [2024-11-19 07:54:30.286211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.286222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.534 [2024-11-19 07:54:30.286238] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:38.534 [2024-11-19 07:54:30.286253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:38.534 [2024-11-19 07:54:30.286276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:38.534 [2024-11-19 07:54:30.286399] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:38.534 [2024-11-19 07:54:30.286414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:38.534 [2024-11-19 07:54:30.286437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.286466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.286478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x615000015700) 00:30:38.534 [2024-11-19 07:54:30.286497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.534 [2024-11-19 07:54:30.286538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.534 [2024-11-19 07:54:30.286683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.534 [2024-11-19 07:54:30.286716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.534 [2024-11-19 07:54:30.286733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.286747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.534 [2024-11-19 07:54:30.286762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:38.534 [2024-11-19 07:54:30.286791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.286808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.286826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.534 [2024-11-19 07:54:30.286846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.534 [2024-11-19 07:54:30.286879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.534 [2024-11-19 07:54:30.287026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.534 [2024-11-19 07:54:30.287049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.534 [2024-11-19 07:54:30.287061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 
07:54:30.287072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.534 [2024-11-19 07:54:30.287087] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:38.534 [2024-11-19 07:54:30.287102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:38.534 [2024-11-19 07:54:30.287124] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:38.534 [2024-11-19 07:54:30.287151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:38.534 [2024-11-19 07:54:30.287188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.287205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.534 [2024-11-19 07:54:30.287225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.534 [2024-11-19 07:54:30.287257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.534 [2024-11-19 07:54:30.287458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.534 [2024-11-19 07:54:30.287479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.534 [2024-11-19 07:54:30.287491] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.287509] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:38.534 [2024-11-19 07:54:30.287524] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:38.534 [2024-11-19 07:54:30.287537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.287571] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.287587] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.287612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.534 [2024-11-19 07:54:30.287630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.534 [2024-11-19 07:54:30.287642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.534 [2024-11-19 07:54:30.287657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.534 [2024-11-19 07:54:30.287682] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:38.534 [2024-11-19 07:54:30.287716] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:38.534 [2024-11-19 07:54:30.287730] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:38.535 [2024-11-19 07:54:30.287743] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:38.535 [2024-11-19 07:54:30.287756] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:38.535 [2024-11-19 07:54:30.287798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.287828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
wait for configure aer (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.287863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.287879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.287891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.535 [2024-11-19 07:54:30.287918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:38.535 [2024-11-19 07:54:30.287953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.535 [2024-11-19 07:54:30.288091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.535 [2024-11-19 07:54:30.288112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.535 [2024-11-19 07:54:30.288124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.535 [2024-11-19 07:54:30.288160] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:38.535 [2024-11-19 07:54:30.288216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.535 [2024-11-19 07:54:30.288235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288259] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:38.535 [2024-11-19 07:54:30.288275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.535 [2024-11-19 07:54:30.288292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:38.535 [2024-11-19 07:54:30.288330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.535 [2024-11-19 07:54:30.288346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.535 [2024-11-19 07:54:30.288389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.535 [2024-11-19 07:54:30.288409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.288438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.288459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.288473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.535 [2024-11-19 07:54:30.288492] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.535 [2024-11-19 07:54:30.288526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:38.535 [2024-11-19 07:54:30.288544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:38.535 [2024-11-19 07:54:30.288556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:38.535 [2024-11-19 07:54:30.288569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.535 [2024-11-19 07:54:30.288581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.535 [2024-11-19 07:54:30.292729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.535 [2024-11-19 07:54:30.292754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.535 [2024-11-19 07:54:30.292766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.292777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.535 [2024-11-19 07:54:30.292794] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:38.535 [2024-11-19 07:54:30.292810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.292833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.292857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state 
to wait for set number of queues (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.292879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.292893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.292905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.535 [2024-11-19 07:54:30.292925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:38.535 [2024-11-19 07:54:30.292959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.535 [2024-11-19 07:54:30.293102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.535 [2024-11-19 07:54:30.293124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.535 [2024-11-19 07:54:30.293136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.293147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.535 [2024-11-19 07:54:30.293245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.293287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.293315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.293334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.535 [2024-11-19 07:54:30.293360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:38.535 [2024-11-19 07:54:30.293415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.535 [2024-11-19 07:54:30.293614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.535 [2024-11-19 07:54:30.293637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.535 [2024-11-19 07:54:30.293649] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.293660] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:38.535 [2024-11-19 07:54:30.293672] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:38.535 [2024-11-19 07:54:30.293684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.293734] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.293751] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.333810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.535 [2024-11-19 07:54:30.333841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.535 [2024-11-19 07:54:30.333854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.333867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.535 [2024-11-19 07:54:30.333920] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:38.535 [2024-11-19 07:54:30.333964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.334003] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.334032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.334048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.535 [2024-11-19 07:54:30.334071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.535 [2024-11-19 07:54:30.334122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.535 [2024-11-19 07:54:30.334348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.535 [2024-11-19 07:54:30.334369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.535 [2024-11-19 07:54:30.334381] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.334392] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:38.535 [2024-11-19 07:54:30.334405] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:38.535 [2024-11-19 07:54:30.334416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.334444] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.334459] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.378714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.535 [2024-11-19 07:54:30.378753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.535 [2024-11-19 07:54:30.378766] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.535 [2024-11-19 07:54:30.378777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.535 [2024-11-19 07:54:30.378825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.378877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:38.535 [2024-11-19 07:54:30.378907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.378929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.378952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.536 [2024-11-19 07:54:30.378988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.536 [2024-11-19 07:54:30.379163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.536 [2024-11-19 07:54:30.379184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.536 [2024-11-19 07:54:30.379210] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.379221] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:38.536 [2024-11-19 07:54:30.379234] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:38.536 [2024-11-19 07:54:30.379245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.536 [2024-11-19 
07:54:30.379274] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.379289] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.423720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.536 [2024-11-19 07:54:30.423760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.536 [2024-11-19 07:54:30.423773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.423784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.536 [2024-11-19 07:54:30.423814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:38.536 [2024-11-19 07:54:30.423847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:38.536 [2024-11-19 07:54:30.423872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:38.536 [2024-11-19 07:54:30.423890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:38.536 [2024-11-19 07:54:30.423905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:38.536 [2024-11-19 07:54:30.423920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:38.536 [2024-11-19 07:54:30.423934] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:38.536 [2024-11-19 07:54:30.423947] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:38.536 [2024-11-19 07:54:30.423962] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:38.536 [2024-11-19 07:54:30.424013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.424051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.424078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.536 [2024-11-19 07:54:30.424104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.424132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.424148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.424166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:38.536 [2024-11-19 07:54:30.424214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.536 [2024-11-19 07:54:30.424235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:38.536 [2024-11-19 07:54:30.424449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.536 [2024-11-19 07:54:30.424471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.536 [2024-11-19 07:54:30.424484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.424497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 
00:30:38.536 [2024-11-19 07:54:30.424522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.536 [2024-11-19 07:54:30.424539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.536 [2024-11-19 07:54:30.424551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.424562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:38.536 [2024-11-19 07:54:30.424587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.424603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.424622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.536 [2024-11-19 07:54:30.424654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:38.536 [2024-11-19 07:54:30.424803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.536 [2024-11-19 07:54:30.424826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.536 [2024-11-19 07:54:30.424838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.424849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:38.536 [2024-11-19 07:54:30.424875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.424891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.424910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.536 [2024-11-19 
07:54:30.424940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:38.536 [2024-11-19 07:54:30.425048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.536 [2024-11-19 07:54:30.425070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.536 [2024-11-19 07:54:30.425081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.425092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:38.536 [2024-11-19 07:54:30.425118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.425134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.425152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.536 [2024-11-19 07:54:30.425183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:38.536 [2024-11-19 07:54:30.425287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.536 [2024-11-19 07:54:30.425307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.536 [2024-11-19 07:54:30.425319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.425335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:38.536 [2024-11-19 07:54:30.425380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.425398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.425418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.536 [2024-11-19 07:54:30.425441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.425456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.425474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.536 [2024-11-19 07:54:30.425495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.425510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.425528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.536 [2024-11-19 07:54:30.425572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.425588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:38.536 [2024-11-19 07:54:30.425617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.536 [2024-11-19 07:54:30.425666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:38.536 [2024-11-19 07:54:30.425684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:38.536 [2024-11-19 07:54:30.425725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:38.536 [2024-11-19 07:54:30.425738] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:38.536 [2024-11-19 07:54:30.426022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.536 [2024-11-19 07:54:30.426044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.536 [2024-11-19 07:54:30.426057] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.426069] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:38.536 [2024-11-19 07:54:30.426083] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:38.536 [2024-11-19 07:54:30.426095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.426130] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.426147] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.426162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.536 [2024-11-19 07:54:30.426186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.536 [2024-11-19 07:54:30.426198] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.426209] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:38.536 [2024-11-19 07:54:30.426221] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:38.536 [2024-11-19 07:54:30.426232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.426249] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.426261] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.536 [2024-11-19 07:54:30.426279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.536 [2024-11-19 07:54:30.426295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.537 [2024-11-19 07:54:30.426306] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.537 [2024-11-19 07:54:30.426332] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:38.537 [2024-11-19 07:54:30.426344] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:38.537 [2024-11-19 07:54:30.426355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.537 [2024-11-19 07:54:30.426370] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.537 [2024-11-19 07:54:30.426396] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.537 [2024-11-19 07:54:30.426410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:38.537 [2024-11-19 07:54:30.426424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:38.537 [2024-11-19 07:54:30.426434] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:38.537 [2024-11-19 07:54:30.426444] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:38.537 [2024-11-19 07:54:30.426455] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:38.537 [2024-11-19 07:54:30.426465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.537 [2024-11-19 07:54:30.426480] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:38.537 [2024-11-19 07:54:30.426491] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:38.796 [2024-11-19 07:54:30.466817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.796 [2024-11-19 07:54:30.466848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.796 [2024-11-19 07:54:30.466861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.796 [2024-11-19 07:54:30.466881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:38.796 [2024-11-19 07:54:30.466922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.796 [2024-11-19 07:54:30.466941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.796 [2024-11-19 07:54:30.466953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.796 [2024-11-19 07:54:30.466964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:38.796 [2024-11-19 07:54:30.466993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.796 [2024-11-19 07:54:30.467011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.796 [2024-11-19 07:54:30.467023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.796 [2024-11-19 07:54:30.467034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700 00:30:38.796 [2024-11-19 07:54:30.467054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.796 [2024-11-19 07:54:30.467071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.796 [2024-11-19 07:54:30.467082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.796 [2024-11-19 07:54:30.467108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:38.796 
===================================================== 00:30:38.796 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:38.796 ===================================================== 00:30:38.796 Controller Capabilities/Features 00:30:38.797 ================================ 00:30:38.797 Vendor ID: 8086 00:30:38.797 Subsystem Vendor ID: 8086 00:30:38.797 Serial Number: SPDK00000000000001 00:30:38.797 Model Number: SPDK bdev Controller 00:30:38.797 Firmware Version: 25.01 00:30:38.797 Recommended Arb Burst: 6 00:30:38.797 IEEE OUI Identifier: e4 d2 5c 00:30:38.797 Multi-path I/O 00:30:38.797 May have multiple subsystem ports: Yes 00:30:38.797 May have multiple controllers: Yes 00:30:38.797 Associated with SR-IOV VF: No 00:30:38.797 Max Data Transfer Size: 131072 00:30:38.797 Max Number of Namespaces: 32 00:30:38.797 Max Number of I/O Queues: 127 00:30:38.797 NVMe Specification Version (VS): 1.3 00:30:38.797 NVMe Specification Version (Identify): 1.3 00:30:38.797 Maximum Queue Entries: 128 00:30:38.797 Contiguous Queues Required: Yes 00:30:38.797 Arbitration Mechanisms Supported 00:30:38.797 Weighted Round Robin: Not Supported 00:30:38.797 Vendor Specific: Not Supported 00:30:38.797 Reset Timeout: 15000 ms 00:30:38.797 Doorbell Stride: 4 bytes 00:30:38.797 NVM Subsystem Reset: Not Supported 00:30:38.797 Command Sets Supported 00:30:38.797 NVM Command Set: Supported 00:30:38.797 Boot Partition: Not Supported 00:30:38.797 Memory Page Size Minimum: 4096 bytes 00:30:38.797 Memory Page Size Maximum: 4096 bytes 00:30:38.797 Persistent Memory Region: Not Supported 00:30:38.797 Optional Asynchronous Events Supported 00:30:38.797 Namespace Attribute Notices: Supported 00:30:38.797 Firmware Activation Notices: Not Supported 00:30:38.797 ANA Change Notices: Not Supported 00:30:38.797 PLE Aggregate Log Change Notices: Not Supported 00:30:38.797 LBA Status Info Alert Notices: Not Supported 00:30:38.797 EGE Aggregate Log Change Notices: Not Supported 
00:30:38.797 Normal NVM Subsystem Shutdown event: Not Supported 00:30:38.797 Zone Descriptor Change Notices: Not Supported 00:30:38.797 Discovery Log Change Notices: Not Supported 00:30:38.797 Controller Attributes 00:30:38.797 128-bit Host Identifier: Supported 00:30:38.797 Non-Operational Permissive Mode: Not Supported 00:30:38.797 NVM Sets: Not Supported 00:30:38.797 Read Recovery Levels: Not Supported 00:30:38.797 Endurance Groups: Not Supported 00:30:38.797 Predictable Latency Mode: Not Supported 00:30:38.797 Traffic Based Keep ALive: Not Supported 00:30:38.797 Namespace Granularity: Not Supported 00:30:38.797 SQ Associations: Not Supported 00:30:38.797 UUID List: Not Supported 00:30:38.797 Multi-Domain Subsystem: Not Supported 00:30:38.797 Fixed Capacity Management: Not Supported 00:30:38.797 Variable Capacity Management: Not Supported 00:30:38.797 Delete Endurance Group: Not Supported 00:30:38.797 Delete NVM Set: Not Supported 00:30:38.797 Extended LBA Formats Supported: Not Supported 00:30:38.797 Flexible Data Placement Supported: Not Supported 00:30:38.797 00:30:38.797 Controller Memory Buffer Support 00:30:38.797 ================================ 00:30:38.797 Supported: No 00:30:38.797 00:30:38.797 Persistent Memory Region Support 00:30:38.797 ================================ 00:30:38.797 Supported: No 00:30:38.797 00:30:38.797 Admin Command Set Attributes 00:30:38.797 ============================ 00:30:38.797 Security Send/Receive: Not Supported 00:30:38.797 Format NVM: Not Supported 00:30:38.797 Firmware Activate/Download: Not Supported 00:30:38.797 Namespace Management: Not Supported 00:30:38.797 Device Self-Test: Not Supported 00:30:38.797 Directives: Not Supported 00:30:38.797 NVMe-MI: Not Supported 00:30:38.797 Virtualization Management: Not Supported 00:30:38.797 Doorbell Buffer Config: Not Supported 00:30:38.797 Get LBA Status Capability: Not Supported 00:30:38.797 Command & Feature Lockdown Capability: Not Supported 00:30:38.797 Abort Command 
Limit: 4 00:30:38.797 Async Event Request Limit: 4 00:30:38.797 Number of Firmware Slots: N/A 00:30:38.797 Firmware Slot 1 Read-Only: N/A 00:30:38.797 Firmware Activation Without Reset: N/A 00:30:38.797 Multiple Update Detection Support: N/A 00:30:38.797 Firmware Update Granularity: No Information Provided 00:30:38.797 Per-Namespace SMART Log: No 00:30:38.797 Asymmetric Namespace Access Log Page: Not Supported 00:30:38.797 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:38.797 Command Effects Log Page: Supported 00:30:38.797 Get Log Page Extended Data: Supported 00:30:38.797 Telemetry Log Pages: Not Supported 00:30:38.797 Persistent Event Log Pages: Not Supported 00:30:38.797 Supported Log Pages Log Page: May Support 00:30:38.797 Commands Supported & Effects Log Page: Not Supported 00:30:38.797 Feature Identifiers & Effects Log Page:May Support 00:30:38.797 NVMe-MI Commands & Effects Log Page: May Support 00:30:38.797 Data Area 4 for Telemetry Log: Not Supported 00:30:38.797 Error Log Page Entries Supported: 128 00:30:38.797 Keep Alive: Supported 00:30:38.797 Keep Alive Granularity: 10000 ms 00:30:38.797 00:30:38.797 NVM Command Set Attributes 00:30:38.797 ========================== 00:30:38.797 Submission Queue Entry Size 00:30:38.797 Max: 64 00:30:38.797 Min: 64 00:30:38.797 Completion Queue Entry Size 00:30:38.797 Max: 16 00:30:38.797 Min: 16 00:30:38.797 Number of Namespaces: 32 00:30:38.797 Compare Command: Supported 00:30:38.797 Write Uncorrectable Command: Not Supported 00:30:38.797 Dataset Management Command: Supported 00:30:38.797 Write Zeroes Command: Supported 00:30:38.797 Set Features Save Field: Not Supported 00:30:38.797 Reservations: Supported 00:30:38.797 Timestamp: Not Supported 00:30:38.797 Copy: Supported 00:30:38.797 Volatile Write Cache: Present 00:30:38.797 Atomic Write Unit (Normal): 1 00:30:38.797 Atomic Write Unit (PFail): 1 00:30:38.797 Atomic Compare & Write Unit: 1 00:30:38.797 Fused Compare & Write: Supported 00:30:38.797 Scatter-Gather 
List 00:30:38.797 SGL Command Set: Supported 00:30:38.797 SGL Keyed: Supported 00:30:38.797 SGL Bit Bucket Descriptor: Not Supported 00:30:38.797 SGL Metadata Pointer: Not Supported 00:30:38.797 Oversized SGL: Not Supported 00:30:38.797 SGL Metadata Address: Not Supported 00:30:38.797 SGL Offset: Supported 00:30:38.797 Transport SGL Data Block: Not Supported 00:30:38.797 Replay Protected Memory Block: Not Supported 00:30:38.797 00:30:38.797 Firmware Slot Information 00:30:38.797 ========================= 00:30:38.797 Active slot: 1 00:30:38.797 Slot 1 Firmware Revision: 25.01 00:30:38.797 00:30:38.797 00:30:38.797 Commands Supported and Effects 00:30:38.797 ============================== 00:30:38.797 Admin Commands 00:30:38.797 -------------- 00:30:38.797 Get Log Page (02h): Supported 00:30:38.797 Identify (06h): Supported 00:30:38.797 Abort (08h): Supported 00:30:38.797 Set Features (09h): Supported 00:30:38.797 Get Features (0Ah): Supported 00:30:38.797 Asynchronous Event Request (0Ch): Supported 00:30:38.797 Keep Alive (18h): Supported 00:30:38.797 I/O Commands 00:30:38.797 ------------ 00:30:38.797 Flush (00h): Supported LBA-Change 00:30:38.797 Write (01h): Supported LBA-Change 00:30:38.797 Read (02h): Supported 00:30:38.797 Compare (05h): Supported 00:30:38.797 Write Zeroes (08h): Supported LBA-Change 00:30:38.797 Dataset Management (09h): Supported LBA-Change 00:30:38.797 Copy (19h): Supported LBA-Change 00:30:38.797 00:30:38.797 Error Log 00:30:38.797 ========= 00:30:38.797 00:30:38.797 Arbitration 00:30:38.797 =========== 00:30:38.797 Arbitration Burst: 1 00:30:38.797 00:30:38.797 Power Management 00:30:38.797 ================ 00:30:38.797 Number of Power States: 1 00:30:38.797 Current Power State: Power State #0 00:30:38.797 Power State #0: 00:30:38.797 Max Power: 0.00 W 00:30:38.797 Non-Operational State: Operational 00:30:38.797 Entry Latency: Not Reported 00:30:38.797 Exit Latency: Not Reported 00:30:38.797 Relative Read Throughput: 0 00:30:38.797 
Relative Read Latency: 0 00:30:38.797 Relative Write Throughput: 0 00:30:38.797 Relative Write Latency: 0 00:30:38.797 Idle Power: Not Reported 00:30:38.797 Active Power: Not Reported 00:30:38.797 Non-Operational Permissive Mode: Not Supported 00:30:38.797 00:30:38.797 Health Information 00:30:38.797 ================== 00:30:38.797 Critical Warnings: 00:30:38.797 Available Spare Space: OK 00:30:38.797 Temperature: OK 00:30:38.797 Device Reliability: OK 00:30:38.797 Read Only: No 00:30:38.797 Volatile Memory Backup: OK 00:30:38.797 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:38.797 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:38.797 Available Spare: 0% 00:30:38.797 Available Spare Threshold: 0% 00:30:38.798 Life Percentage Used:[2024-11-19 07:54:30.467332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.467352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.467375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.467411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:38.798 [2024-11-19 07:54:30.467540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.798 [2024-11-19 07:54:30.467562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.467579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.467592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.467670] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:38.798 [2024-11-19 07:54:30.471716] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.471750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.798 [2024-11-19 07:54:30.471766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.471780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.798 [2024-11-19 07:54:30.471793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.471807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.798 [2024-11-19 07:54:30.471820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.471834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:38.798 [2024-11-19 07:54:30.471856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.471871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.471883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.471904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.471941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.798 [2024-11-19 07:54:30.472082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:30:38.798 [2024-11-19 07:54:30.472105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.472117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.472129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.472158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.472172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.472184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.472209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.472251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.798 [2024-11-19 07:54:30.472433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.798 [2024-11-19 07:54:30.472455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.472466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.472477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.472492] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:38.798 [2024-11-19 07:54:30.472506] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:38.798 [2024-11-19 07:54:30.472532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.472548] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.472574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.472595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.472627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.798 [2024-11-19 07:54:30.472760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.798 [2024-11-19 07:54:30.472783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.472795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.472806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.472835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.472850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.472861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.472880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.472911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.798 [2024-11-19 07:54:30.473011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.798 [2024-11-19 07:54:30.473036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.473049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 
07:54:30.473060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.473088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.473132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.473162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.798 [2024-11-19 07:54:30.473273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.798 [2024-11-19 07:54:30.473294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.473306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.473345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.473390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.473420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.798 [2024-11-19 07:54:30.473562] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.798 [2024-11-19 07:54:30.473583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.473594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.473632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.473682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.473724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.798 [2024-11-19 07:54:30.473864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.798 [2024-11-19 07:54:30.473885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.473897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.473935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.473962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.473980] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.474010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.798 [2024-11-19 07:54:30.474109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.798 [2024-11-19 07:54:30.474129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.474141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.474152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.474179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.474194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.474205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.798 [2024-11-19 07:54:30.474223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.798 [2024-11-19 07:54:30.474253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.798 [2024-11-19 07:54:30.474356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.798 [2024-11-19 07:54:30.474377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.798 [2024-11-19 07:54:30.474389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.798 [2024-11-19 07:54:30.474400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.798 [2024-11-19 07:54:30.474427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.798 [2024-11-19 
07:54:30.474442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.474454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.799 [2024-11-19 07:54:30.474482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.799 [2024-11-19 07:54:30.474514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.799 [2024-11-19 07:54:30.474646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.799 [2024-11-19 07:54:30.474673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.799 [2024-11-19 07:54:30.474685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.474706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.799 [2024-11-19 07:54:30.474734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.474750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.474761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.799 [2024-11-19 07:54:30.474783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.799 [2024-11-19 07:54:30.474816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.799 [2024-11-19 07:54:30.474949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.799 [2024-11-19 07:54:30.474971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.799 [2024-11-19 07:54:30.474983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:38.799 [2024-11-19 07:54:30.474994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.799 [2024-11-19 07:54:30.475022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.475038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.475049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.799 [2024-11-19 07:54:30.475067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.799 [2024-11-19 07:54:30.475098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.799 [2024-11-19 07:54:30.475203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.799 [2024-11-19 07:54:30.475228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.799 [2024-11-19 07:54:30.475241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.475252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.799 [2024-11-19 07:54:30.475280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.475296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.475306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.799 [2024-11-19 07:54:30.475325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.799 [2024-11-19 07:54:30.475369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.799 [2024-11-19 07:54:30.478708] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.799 [2024-11-19 07:54:30.478733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.799 [2024-11-19 07:54:30.478745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.478756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.799 [2024-11-19 07:54:30.478798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.478815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.478826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:38.799 [2024-11-19 07:54:30.478845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.799 [2024-11-19 07:54:30.478877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:38.799 [2024-11-19 07:54:30.479016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:38.799 [2024-11-19 07:54:30.479036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:38.799 [2024-11-19 07:54:30.479048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:38.799 [2024-11-19 07:54:30.479059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:38.799 [2024-11-19 07:54:30.479082] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:30:38.799 0% 00:30:38.799 Data Units Read: 0 00:30:38.799 Data Units Written: 0 00:30:38.799 Host Read Commands: 0 00:30:38.799 Host Write Commands: 0 00:30:38.799 Controller Busy Time: 0 minutes 00:30:38.799 Power Cycles: 0 00:30:38.799 Power On Hours: 0 hours 
00:30:38.799 Unsafe Shutdowns: 0 00:30:38.799 Unrecoverable Media Errors: 0 00:30:38.799 Lifetime Error Log Entries: 0 00:30:38.799 Warning Temperature Time: 0 minutes 00:30:38.799 Critical Temperature Time: 0 minutes 00:30:38.799 00:30:38.799 Number of Queues 00:30:38.799 ================ 00:30:38.799 Number of I/O Submission Queues: 127 00:30:38.799 Number of I/O Completion Queues: 127 00:30:38.799 00:30:38.799 Active Namespaces 00:30:38.799 ================= 00:30:38.799 Namespace ID:1 00:30:38.799 Error Recovery Timeout: Unlimited 00:30:38.799 Command Set Identifier: NVM (00h) 00:30:38.799 Deallocate: Supported 00:30:38.799 Deallocated/Unwritten Error: Not Supported 00:30:38.799 Deallocated Read Value: Unknown 00:30:38.799 Deallocate in Write Zeroes: Not Supported 00:30:38.799 Deallocated Guard Field: 0xFFFF 00:30:38.799 Flush: Supported 00:30:38.799 Reservation: Supported 00:30:38.799 Namespace Sharing Capabilities: Multiple Controllers 00:30:38.799 Size (in LBAs): 131072 (0GiB) 00:30:38.799 Capacity (in LBAs): 131072 (0GiB) 00:30:38.799 Utilization (in LBAs): 131072 (0GiB) 00:30:38.799 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:38.799 EUI64: ABCDEF0123456789 00:30:38.799 UUID: 52b05d90-4930-4f19-bf86-bf71d9efaf97 00:30:38.799 Thin Provisioning: Not Supported 00:30:38.799 Per-NS Atomic Units: Yes 00:30:38.799 Atomic Boundary Size (Normal): 0 00:30:38.799 Atomic Boundary Size (PFail): 0 00:30:38.799 Atomic Boundary Offset: 0 00:30:38.799 Maximum Single Source Range Length: 65535 00:30:38.799 Maximum Copy Length: 65535 00:30:38.799 Maximum Source Range Count: 1 00:30:38.799 NGUID/EUI64 Never Reused: No 00:30:38.799 Namespace Write Protected: No 00:30:38.799 Number of LBA Formats: 1 00:30:38.799 Current LBA Format: LBA Format #00 00:30:38.799 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:38.799 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:38.799 rmmod nvme_tcp 00:30:38.799 rmmod nvme_fabrics 00:30:38.799 rmmod nvme_keyring 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3069330 ']' 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3069330 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3069330 ']' 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3069330 00:30:38.799 
07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3069330 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3069330' 00:30:38.799 killing process with pid 3069330 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3069330 00:30:38.799 07:54:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3069330 00:30:40.182 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:40.182 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:40.182 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:40.182 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:40.183 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:40.183 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:40.183 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:40.183 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:40.183 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:40.183 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:30:40.183 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.183 07:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.090 07:54:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:42.090 00:30:42.090 real 0m7.732s 00:30:42.090 user 0m12.022s 00:30:42.090 sys 0m2.244s 00:30:42.090 07:54:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.090 07:54:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.090 ************************************ 00:30:42.090 END TEST nvmf_identify 00:30:42.090 ************************************ 00:30:42.090 07:54:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:42.090 07:54:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:42.090 07:54:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.090 07:54:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.090 ************************************ 00:30:42.090 START TEST nvmf_perf 00:30:42.090 ************************************ 00:30:42.090 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:42.350 * Looking for test storage... 
00:30:42.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:42.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.350 --rc genhtml_branch_coverage=1 00:30:42.350 --rc genhtml_function_coverage=1 00:30:42.350 --rc genhtml_legend=1 00:30:42.350 --rc geninfo_all_blocks=1 00:30:42.350 --rc geninfo_unexecuted_blocks=1 00:30:42.350 00:30:42.350 ' 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:42.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:42.350 --rc genhtml_branch_coverage=1 00:30:42.350 --rc genhtml_function_coverage=1 00:30:42.350 --rc genhtml_legend=1 00:30:42.350 --rc geninfo_all_blocks=1 00:30:42.350 --rc geninfo_unexecuted_blocks=1 00:30:42.350 00:30:42.350 ' 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:42.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.350 --rc genhtml_branch_coverage=1 00:30:42.350 --rc genhtml_function_coverage=1 00:30:42.350 --rc genhtml_legend=1 00:30:42.350 --rc geninfo_all_blocks=1 00:30:42.350 --rc geninfo_unexecuted_blocks=1 00:30:42.350 00:30:42.350 ' 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:42.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.350 --rc genhtml_branch_coverage=1 00:30:42.350 --rc genhtml_function_coverage=1 00:30:42.350 --rc genhtml_legend=1 00:30:42.350 --rc geninfo_all_blocks=1 00:30:42.350 --rc geninfo_unexecuted_blocks=1 00:30:42.350 00:30:42.350 ' 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.350 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:42.351 07:54:34 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.351 07:54:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.889 07:54:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.889 
07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:44.889 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:44.889 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:44.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.889 07:54:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:44.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:30:44.889 00:30:44.889 --- 10.0.0.2 ping statistics --- 00:30:44.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.889 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:30:44.889 00:30:44.889 --- 10.0.0.1 ping statistics --- 00:30:44.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.889 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:44.889 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3071680 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3071680 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3071680 ']' 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.890 07:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:44.890 [2024-11-19 07:54:36.455964] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:44.890 [2024-11-19 07:54:36.456122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.890 [2024-11-19 07:54:36.610990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.890 [2024-11-19 07:54:36.754118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.890 [2024-11-19 07:54:36.754209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.890 [2024-11-19 07:54:36.754236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.890 [2024-11-19 07:54:36.754259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.890 [2024-11-19 07:54:36.754279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
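The startup sequence above launches `nvmf_tgt` inside the namespace and then prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". A minimal sketch of that handshake, re-created from the trace (this mirrors the role of `common/autotest_common.sh`'s `waitforlisten`, but it is a standalone illustration, not the SPDK source; the retry count and sleep interval are assumptions):

```shell
# Poll until the launched app is alive AND listening on its UNIX-domain
# RPC socket (/var/tmp/spdk.sock in the log), or give up after N retries.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=${3:-100}
  while (( retries-- > 0 )); do
    # Bail out early if the target died before it ever started listening.
    kill -0 "$pid" 2>/dev/null || return 1
    # Success once the RPC socket file exists.
    [[ -S $rpc_addr ]] && return 0
    sleep 0.1
  done
  return 1
}
```

Usage mirrors the trace: launch the target, record `$!` as the pid to watch, and only issue `rpc.py` calls once `waitforlisten` returns success, e.g. `nvmf_tgt -i 0 -e 0xFFFF -m 0xF & waitforlisten $!`.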
00:30:44.890 [2024-11-19 07:54:36.757138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.890 [2024-11-19 07:54:36.757219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.890 [2024-11-19 07:54:36.757335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.890 [2024-11-19 07:54:36.757340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.829 07:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.829 07:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:45.829 07:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.829 07:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.829 07:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:45.829 07:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.829 07:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:45.829 07:54:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:49.115 07:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:49.115 07:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:49.115 07:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:49.115 07:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:49.374 07:54:41 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:49.374 07:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:49.374 07:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:49.374 07:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:49.374 07:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:49.633 [2024-11-19 07:54:41.513927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.633 07:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.199 07:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:50.199 07:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:50.199 07:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:50.199 07:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:50.458 07:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.717 [2024-11-19 07:54:42.620052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.717 07:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:51.285 07:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:51.285 07:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:51.285 07:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:51.285 07:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:52.663 Initializing NVMe Controllers 00:30:52.663 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:52.663 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:52.663 Initialization complete. Launching workers. 00:30:52.663 ======================================================== 00:30:52.663 Latency(us) 00:30:52.663 Device Information : IOPS MiB/s Average min max 00:30:52.663 PCIE (0000:88:00.0) NSID 1 from core 0: 74391.75 290.59 429.44 43.09 5320.23 00:30:52.663 ======================================================== 00:30:52.663 Total : 74391.75 290.59 429.44 43.09 5320.23 00:30:52.663 00:30:52.663 07:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.041 Initializing NVMe Controllers 00:30:54.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:54.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:54.041 Initialization complete. Launching workers. 
00:30:54.041 ======================================================== 00:30:54.041 Latency(us) 00:30:54.041 Device Information : IOPS MiB/s Average min max 00:30:54.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 113.00 0.44 9179.82 192.85 44840.91 00:30:54.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 69.00 0.27 14917.70 6959.53 47926.21 00:30:54.041 ======================================================== 00:30:54.041 Total : 182.00 0.71 11355.17 192.85 47926.21 00:30:54.041 00:30:54.300 07:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:55.729 Initializing NVMe Controllers 00:30:55.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:55.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:55.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:55.729 Initialization complete. Launching workers. 
00:30:55.729 ======================================================== 00:30:55.729 Latency(us) 00:30:55.729 Device Information : IOPS MiB/s Average min max 00:30:55.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5218.69 20.39 6155.66 903.65 12317.89 00:30:55.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3868.77 15.11 8309.21 5718.40 16489.46 00:30:55.729 ======================================================== 00:30:55.729 Total : 9087.45 35.50 7072.48 903.65 16489.46 00:30:55.729 00:30:55.729 07:54:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:55.729 07:54:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:55.729 07:54:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:59.018 Initializing NVMe Controllers 00:30:59.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.018 Controller IO queue size 128, less than required. 00:30:59.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:59.018 Controller IO queue size 128, less than required. 00:30:59.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:59.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:59.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:59.018 Initialization complete. Launching workers. 
00:30:59.018 ========================================================
00:30:59.018 Latency(us)
00:30:59.018 Device Information : IOPS MiB/s Average min max
00:30:59.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1316.89 329.22 99824.16 63335.23 264776.52
00:30:59.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 539.84 134.96 260610.13 119893.04 501261.71
00:30:59.018 ========================================================
00:30:59.018 Total : 1856.74 464.18 146572.31 63335.23 501261.71
00:30:59.018
00:30:59.018 07:54:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:30:59.018 No valid NVMe controllers or AIO or URING devices found
00:30:59.018 Initializing NVMe Controllers
00:30:59.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:59.018 Controller IO queue size 128, less than required.
00:30:59.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:59.018 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:30:59.018 Controller IO queue size 128, less than required.
00:30:59.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:59.019 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:30:59.019 WARNING: Some requested NVMe devices were skipped
00:30:59.019 07:54:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:31:02.306 Initializing NVMe Controllers
00:31:02.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:02.306 Controller IO queue size 128, less than required.
00:31:02.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:02.306 Controller IO queue size 128, less than required.
00:31:02.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:02.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:02.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:02.306 Initialization complete. Launching workers.
00:31:02.306
00:31:02.306 ====================
00:31:02.306 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:31:02.306 TCP transport:
00:31:02.306 polls: 5500
00:31:02.306 idle_polls: 3585
00:31:02.306 sock_completions: 1915
00:31:02.306 nvme_completions: 3953
00:31:02.306 submitted_requests: 6018
00:31:02.306 queued_requests: 1
00:31:02.306
00:31:02.306 ====================
00:31:02.306 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:31:02.306 TCP transport:
00:31:02.306 polls: 6283
00:31:02.306 idle_polls: 4176
00:31:02.306 sock_completions: 2107
00:31:02.306 nvme_completions: 4175
00:31:02.306 submitted_requests: 6246
00:31:02.306 queued_requests: 1
00:31:02.306 ========================================================
00:31:02.306 Latency(us)
00:31:02.306 Device Information : IOPS MiB/s Average min max
00:31:02.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 986.93 246.73 142761.40 62782.45 437267.44
00:31:02.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1042.37 260.59 124775.54 85502.73 311378.30
00:31:02.306 ========================================================
00:31:02.306 Total : 2029.31 507.33 133522.79 62782.45 437267.44
00:31:02.306
00:31:02.306 07:54:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:31:02.306 07:54:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:02.306 07:54:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:31:02.307 07:54:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']'
00:31:02.307 07:54:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf --
host/perf.sh@72 -- # ls_guid=685ac551-5ffa-4097-9baa-131eada0c615 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 685ac551-5ffa-4097-9baa-131eada0c615 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=685ac551-5ffa-4097-9baa-131eada0c615 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:06.492 { 00:31:06.492 "uuid": "685ac551-5ffa-4097-9baa-131eada0c615", 00:31:06.492 "name": "lvs_0", 00:31:06.492 "base_bdev": "Nvme0n1", 00:31:06.492 "total_data_clusters": 238234, 00:31:06.492 "free_clusters": 238234, 00:31:06.492 "block_size": 512, 00:31:06.492 "cluster_size": 4194304 00:31:06.492 } 00:31:06.492 ]' 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="685ac551-5ffa-4097-9baa-131eada0c615") .free_clusters' 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="685ac551-5ffa-4097-9baa-131eada0c615") .cluster_size' 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:31:06.492 952936 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:06.492 07:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 685ac551-5ffa-4097-9baa-131eada0c615 lbd_0 20480 00:31:06.492 07:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e4efb4e5-f89d-4a98-a2a7-bc50522b7b90 00:31:06.492 07:54:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e4efb4e5-f89d-4a98-a2a7-bc50522b7b90 lvs_n_0 00:31:07.428 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2d15aa8e-32de-4655-960c-a92575dc0812 00:31:07.428 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2d15aa8e-32de-4655-960c-a92575dc0812 00:31:07.428 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=2d15aa8e-32de-4655-960c-a92575dc0812 00:31:07.428 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:07.428 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:07.428 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:07.428 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:07.685 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:07.685 { 00:31:07.685 "uuid": "685ac551-5ffa-4097-9baa-131eada0c615", 00:31:07.685 "name": "lvs_0", 00:31:07.685 "base_bdev": "Nvme0n1", 00:31:07.685 "total_data_clusters": 238234, 00:31:07.685 "free_clusters": 233114, 00:31:07.685 "block_size": 512, 00:31:07.685 
"cluster_size": 4194304 00:31:07.686 }, 00:31:07.686 { 00:31:07.686 "uuid": "2d15aa8e-32de-4655-960c-a92575dc0812", 00:31:07.686 "name": "lvs_n_0", 00:31:07.686 "base_bdev": "e4efb4e5-f89d-4a98-a2a7-bc50522b7b90", 00:31:07.686 "total_data_clusters": 5114, 00:31:07.686 "free_clusters": 5114, 00:31:07.686 "block_size": 512, 00:31:07.686 "cluster_size": 4194304 00:31:07.686 } 00:31:07.686 ]' 00:31:07.686 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2d15aa8e-32de-4655-960c-a92575dc0812") .free_clusters' 00:31:07.686 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:07.686 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2d15aa8e-32de-4655-960c-a92575dc0812") .cluster_size' 00:31:07.686 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:07.686 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:07.686 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:31:07.686 20456 00:31:07.686 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:07.686 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2d15aa8e-32de-4655-960c-a92575dc0812 lbd_nest_0 20456 00:31:07.944 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=ddb749a0-baed-428a-b8b0-00f389aa46a9 00:31:07.944 07:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:08.511 07:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:08.511 07:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ddb749a0-baed-428a-b8b0-00f389aa46a9 00:31:08.770 07:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.770 07:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:08.770 07:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:08.770 07:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:08.770 07:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:08.770 07:55:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:20.976 Initializing NVMe Controllers 00:31:20.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:20.976 Initialization complete. Launching workers. 
00:31:20.976 ========================================================
00:31:20.976 Latency(us)
00:31:20.976 Device Information : IOPS MiB/s Average min max
00:31:20.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.80 0.02 21858.36 247.55 45819.96
00:31:20.976 ========================================================
00:31:20.976 Total : 45.80 0.02 21858.36 247.55 45819.96
00:31:20.976
00:31:20.976 07:55:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:20.976 07:55:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:30.951 Initializing NVMe Controllers
00:31:30.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:30.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:30.951 Initialization complete. Launching workers.
00:31:30.951 ========================================================
00:31:30.951 Latency(us)
00:31:30.951 Device Information : IOPS MiB/s Average min max
00:31:30.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.10 9.01 13876.16 5975.68 48885.11
00:31:30.951 ========================================================
00:31:30.951 Total : 72.10 9.01 13876.16 5975.68 48885.11
00:31:30.951
00:31:30.951 07:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:31:30.951 07:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:30.951 07:55:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:40.933 Initializing NVMe Controllers
00:31:40.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:40.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:40.933 Initialization complete. Launching workers.
00:31:40.933 ========================================================
00:31:40.933 Latency(us)
00:31:40.933 Device Information : IOPS MiB/s Average min max
00:31:40.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4833.58 2.36 6619.12 630.84 12318.07
00:31:40.933 ========================================================
00:31:40.933 Total : 4833.58 2.36 6619.12 630.84 12318.07
00:31:40.933
00:31:40.933 07:55:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:40.933 07:55:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:50.934 Initializing NVMe Controllers
00:31:50.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:50.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:50.934 Initialization complete. Launching workers.
00:31:50.934 ========================================================
00:31:50.934 Latency(us)
00:31:50.934 Device Information : IOPS MiB/s Average min max
00:31:50.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3116.33 389.54 10271.78 1408.50 24122.05
00:31:50.934 ========================================================
00:31:50.934 Total : 3116.33 389.54 10271.78 1408.50 24122.05
00:31:50.934
00:31:50.934 07:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:31:50.934 07:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:31:50.934 07:55:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:03.133 Initializing NVMe Controllers
00:32:03.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:03.133 Controller IO queue size 128, less than required.
00:32:03.133 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:03.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:03.133 Initialization complete. Launching workers.
00:32:03.133 ========================================================
00:32:03.133 Latency(us)
00:32:03.133 Device Information : IOPS MiB/s Average min max
00:32:03.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8541.57 4.17 14985.97 1936.58 56434.81
00:32:03.133 ========================================================
00:32:03.133 Total : 8541.57 4.17 14985.97 1936.58 56434.81
00:32:03.133
00:32:03.133 07:55:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:32:03.133 07:55:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:13.134 Initializing NVMe Controllers
00:32:13.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:13.134 Controller IO queue size 128, less than required.
00:32:13.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:13.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:13.134 Initialization complete. Launching workers.
00:32:13.134 ========================================================
00:32:13.134 Latency(us)
00:32:13.134 Device Information : IOPS MiB/s Average min max
00:32:13.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1155.10 144.39 111319.83 15772.93 239645.84
00:32:13.134 ========================================================
00:32:13.135 Total : 1155.10 144.39 111319.83 15772.93 239645.84
00:32:13.135
00:32:13.135 07:56:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:13.135 07:56:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ddb749a0-baed-428a-b8b0-00f389aa46a9
00:32:13.135 07:56:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:32:13.135 07:56:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e4efb4e5-f89d-4a98-a2a7-bc50522b7b90
00:32:13.438 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i
in {1..20}
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:13.719 rmmod nvme_tcp
00:32:13.719 rmmod nvme_fabrics
00:32:13.719 rmmod nvme_keyring
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3071680 ']'
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3071680
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3071680 ']'
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3071680
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:13.719 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071680
00:32:13.977 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:13.977 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:13.977 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071680'
00:32:13.977 killing process with pid 3071680
00:32:13.977 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3071680
00:32:13.977 07:56:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3071680
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- #
[[ tcp == \t\c\p ]]
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:16.505 07:56:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:18.408
00:32:18.408 real 1m36.080s
00:32:18.408 user 5m56.231s
00:32:18.408 sys 0m15.573s
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:32:18.408 ************************************
00:32:18.408 END TEST nvmf_perf
00:32:18.408 ************************************
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host --
common/autotest_common.sh@10 -- # set +x
00:32:18.408 ************************************
00:32:18.408 START TEST nvmf_fio_host
00:32:18.408 ************************************
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:32:18.408 * Looking for test storage...
00:32:18.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<'
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1
00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host --
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
# export 'LCOV_OPTS= 00:32:18.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.408 --rc genhtml_branch_coverage=1 00:32:18.408 --rc genhtml_function_coverage=1 00:32:18.408 --rc genhtml_legend=1 00:32:18.408 --rc geninfo_all_blocks=1 00:32:18.408 --rc geninfo_unexecuted_blocks=1 00:32:18.408 00:32:18.408 ' 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:18.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.408 --rc genhtml_branch_coverage=1 00:32:18.408 --rc genhtml_function_coverage=1 00:32:18.408 --rc genhtml_legend=1 00:32:18.408 --rc geninfo_all_blocks=1 00:32:18.408 --rc geninfo_unexecuted_blocks=1 00:32:18.408 00:32:18.408 ' 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:18.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.408 --rc genhtml_branch_coverage=1 00:32:18.408 --rc genhtml_function_coverage=1 00:32:18.408 --rc genhtml_legend=1 00:32:18.408 --rc geninfo_all_blocks=1 00:32:18.408 --rc geninfo_unexecuted_blocks=1 00:32:18.408 00:32:18.408 ' 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:18.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:18.408 --rc genhtml_branch_coverage=1 00:32:18.408 --rc genhtml_function_coverage=1 00:32:18.408 --rc genhtml_legend=1 00:32:18.408 --rc geninfo_all_blocks=1 00:32:18.408 --rc geninfo_unexecuted_blocks=1 00:32:18.408 00:32:18.408 ' 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.408 07:56:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:18.408 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:18.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:18.409 07:56:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:18.409 07:56:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.938 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:32:20.939 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:20.939 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.939 07:56:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:20.939 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:20.939 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.939 07:56:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:20.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:32:20.939 00:32:20.939 --- 10.0.0.2 ping statistics --- 00:32:20.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.939 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:32:20.939 00:32:20.939 --- 10.0.0.1 ping statistics --- 00:32:20.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.939 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3084185 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3084185 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3084185 ']' 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.939 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.940 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.940 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.940 07:56:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.940 [2024-11-19 07:56:12.534514] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:32:20.940 [2024-11-19 07:56:12.534656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.940 [2024-11-19 07:56:12.678433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:20.940 [2024-11-19 07:56:12.800797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.940 [2024-11-19 07:56:12.800889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:20.940 [2024-11-19 07:56:12.800911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.940 [2024-11-19 07:56:12.800932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:20.940 [2024-11-19 07:56:12.800950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:20.940 [2024-11-19 07:56:12.803535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.940 [2024-11-19 07:56:12.803599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.940 [2024-11-19 07:56:12.803644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.940 [2024-11-19 07:56:12.803650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:21.874 07:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.874 07:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:21.874 07:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:22.133 [2024-11-19 07:56:13.819268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.133 07:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:22.133 07:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:22.133 07:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.133 07:56:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:22.391 Malloc1 00:32:22.391 07:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:22.649 07:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:22.907 07:56:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:23.165 [2024-11-19 07:56:15.057314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.165 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:23.729 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:23.729 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:23.730 07:56:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:23.730 07:56:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:23.730 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:23.730 fio-3.35 00:32:23.730 Starting 1 thread 00:32:26.258 00:32:26.258 test: (groupid=0, jobs=1): err= 0: pid=3084662: Tue Nov 19 07:56:18 2024 00:32:26.258 read: 
IOPS=6232, BW=24.3MiB/s (25.5MB/s)(48.9MiB/2009msec) 00:32:26.258 slat (usec): min=3, max=198, avg= 3.83, stdev= 2.79 00:32:26.258 clat (usec): min=3729, max=19527, avg=11129.75, stdev=993.89 00:32:26.258 lat (usec): min=3780, max=19531, avg=11133.58, stdev=993.75 00:32:26.258 clat percentiles (usec): 00:32:26.258 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:32:26.258 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:32:26.258 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:32:26.258 | 99.00th=[13435], 99.50th=[13698], 99.90th=[17171], 99.95th=[17695], 00:32:26.258 | 99.99th=[19530] 00:32:26.258 bw ( KiB/s): min=23496, max=25600, per=99.86%, avg=24896.00, stdev=948.58, samples=4 00:32:26.258 iops : min= 5874, max= 6400, avg=6224.00, stdev=237.14, samples=4 00:32:26.258 write: IOPS=6224, BW=24.3MiB/s (25.5MB/s)(48.8MiB/2009msec); 0 zone resets 00:32:26.258 slat (usec): min=3, max=153, avg= 3.88, stdev= 1.92 00:32:26.258 clat (usec): min=1912, max=17439, avg=9279.32, stdev=816.82 00:32:26.258 lat (usec): min=1929, max=17443, avg=9283.20, stdev=816.76 00:32:26.258 clat percentiles (usec): 00:32:26.258 | 1.00th=[ 7439], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[ 8717], 00:32:26.258 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:32:26.258 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:32:26.258 | 99.00th=[10945], 99.50th=[11207], 99.90th=[15664], 99.95th=[17171], 00:32:26.258 | 99.99th=[17433] 00:32:26.258 bw ( KiB/s): min=24656, max=25216, per=100.00%, avg=24902.00, stdev=238.52, samples=4 00:32:26.258 iops : min= 6164, max= 6304, avg=6225.50, stdev=59.63, samples=4 00:32:26.258 lat (msec) : 2=0.01%, 4=0.08%, 10=48.43%, 20=51.48% 00:32:26.258 cpu : usr=70.32%, sys=28.09%, ctx=85, majf=0, minf=1547 00:32:26.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:32:26.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:26.258 issued rwts: total=12521,12505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:26.258 00:32:26.258 Run status group 0 (all jobs): 00:32:26.258 READ: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB (51.3MB), run=2009-2009msec 00:32:26.258 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.8MiB (51.2MB), run=2009-2009msec 00:32:26.517 ----------------------------------------------------- 00:32:26.517 Suppressions used: 00:32:26.517 count bytes template 00:32:26.517 1 57 /usr/src/fio/parse.c 00:32:26.517 1 8 libtcmalloc_minimal.so 00:32:26.517 ----------------------------------------------------- 00:32:26.517 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:26.517 07:56:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:26.517 07:56:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:26.775 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:26.775 fio-3.35 00:32:26.775 Starting 1 thread 00:32:29.304 00:32:29.304 test: (groupid=0, jobs=1): err= 0: pid=3085119: Tue Nov 19 07:56:21 2024 00:32:29.304 read: IOPS=6165, BW=96.3MiB/s (101MB/s)(193MiB/2007msec) 00:32:29.304 slat (usec): min=3, max=118, avg= 5.30, stdev= 2.13 00:32:29.304 clat (usec): min=3164, max=23399, 
avg=11885.18, stdev=2465.89 00:32:29.304 lat (usec): min=3169, max=23404, avg=11890.48, stdev=2465.98 00:32:29.304 clat percentiles (usec): 00:32:29.304 | 1.00th=[ 6652], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[ 9896], 00:32:29.304 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:32:29.304 | 70.00th=[13042], 80.00th=[13698], 90.00th=[15008], 95.00th=[16188], 00:32:29.304 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20579], 99.95th=[20841], 00:32:29.304 | 99.99th=[21365] 00:32:29.304 bw ( KiB/s): min=38048, max=57664, per=48.95%, avg=48296.00, stdev=9993.85, samples=4 00:32:29.304 iops : min= 2378, max= 3604, avg=3018.50, stdev=624.62, samples=4 00:32:29.304 write: IOPS=3613, BW=56.5MiB/s (59.2MB/s)(99.1MiB/1755msec); 0 zone resets 00:32:29.304 slat (usec): min=33, max=146, avg=36.66, stdev= 5.61 00:32:29.304 clat (usec): min=6694, max=30221, avg=16200.50, stdev=2612.63 00:32:29.304 lat (usec): min=6731, max=30258, avg=16237.16, stdev=2612.51 00:32:29.304 clat percentiles (usec): 00:32:29.304 | 1.00th=[11207], 5.00th=[12256], 10.00th=[13042], 20.00th=[13829], 00:32:29.304 | 30.00th=[14746], 40.00th=[15533], 50.00th=[16188], 60.00th=[16712], 00:32:29.304 | 70.00th=[17433], 80.00th=[18220], 90.00th=[19530], 95.00th=[20317], 00:32:29.304 | 99.00th=[22938], 99.50th=[24511], 99.90th=[26608], 99.95th=[30016], 00:32:29.304 | 99.99th=[30278] 00:32:29.304 bw ( KiB/s): min=40448, max=59840, per=87.32%, avg=50488.00, stdev=9783.85, samples=4 00:32:29.304 iops : min= 2528, max= 3740, avg=3155.50, stdev=611.49, samples=4 00:32:29.304 lat (msec) : 4=0.06%, 10=14.04%, 20=83.47%, 50=2.43% 00:32:29.304 cpu : usr=77.52%, sys=21.09%, ctx=40, majf=0, minf=2112 00:32:29.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:32:29.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.1% 00:32:29.304 issued rwts: total=12375,6342,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:29.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:29.304 00:32:29.304 Run status group 0 (all jobs): 00:32:29.304 READ: bw=96.3MiB/s (101MB/s), 96.3MiB/s-96.3MiB/s (101MB/s-101MB/s), io=193MiB (203MB), run=2007-2007msec 00:32:29.304 WRITE: bw=56.5MiB/s (59.2MB/s), 56.5MiB/s-56.5MiB/s (59.2MB/s-59.2MB/s), io=99.1MiB (104MB), run=1755-1755msec 00:32:29.562 ----------------------------------------------------- 00:32:29.562 Suppressions used: 00:32:29.562 count bytes template 00:32:29.562 1 57 /usr/src/fio/parse.c 00:32:29.562 31 2976 /usr/src/fio/iolog.c 00:32:29.562 1 8 libtcmalloc_minimal.so 00:32:29.562 ----------------------------------------------------- 00:32:29.562 00:32:29.562 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:29.820 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 
00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:32:29.821 07:56:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:33.101 Nvme0n1 00:32:33.101 07:56:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:36.382 07:56:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=989f8a1c-1308-4f8e-b36e-19d82097181f 00:32:36.382 07:56:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 989f8a1c-1308-4f8e-b36e-19d82097181f 00:32:36.382 07:56:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=989f8a1c-1308-4f8e-b36e-19d82097181f 00:32:36.382 07:56:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:36.382 07:56:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:36.382 07:56:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:36.382 07:56:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:36.382 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:36.382 { 00:32:36.382 "uuid": "989f8a1c-1308-4f8e-b36e-19d82097181f", 00:32:36.382 "name": "lvs_0", 00:32:36.382 "base_bdev": "Nvme0n1", 00:32:36.382 "total_data_clusters": 930, 00:32:36.382 "free_clusters": 930, 00:32:36.382 "block_size": 512, 00:32:36.382 "cluster_size": 1073741824 00:32:36.382 } 00:32:36.382 ]' 00:32:36.382 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | 
select(.uuid=="989f8a1c-1308-4f8e-b36e-19d82097181f") .free_clusters' 00:32:36.382 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:36.382 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="989f8a1c-1308-4f8e-b36e-19d82097181f") .cluster_size' 00:32:36.382 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:36.382 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:36.382 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:36.382 952320 00:32:36.382 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:36.949 77b540ae-5fe4-4d39-a857-8882531a6048 00:32:36.949 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:37.206 07:56:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:37.473 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:32:37.731 07:56:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:37.989 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:37.989 fio-3.35 00:32:37.989 Starting 1 thread 00:32:40.520 00:32:40.520 test: (groupid=0, jobs=1): err= 0: pid=3086505: Tue Nov 19 07:56:32 2024 00:32:40.520 read: IOPS=4429, BW=17.3MiB/s (18.1MB/s)(34.8MiB/2011msec) 00:32:40.520 slat (usec): min=3, max=157, avg= 3.99, stdev= 2.80 00:32:40.520 clat (usec): min=1115, max=172609, avg=15682.43, stdev=13138.19 00:32:40.520 lat (usec): min=1120, max=172666, avg=15686.43, stdev=13138.57 00:32:40.520 clat percentiles (msec): 00:32:40.520 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:40.520 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:32:40.520 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 17], 95.00th=[ 17], 00:32:40.520 | 99.00th=[ 20], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:40.520 | 99.99th=[ 174] 00:32:40.520 bw ( KiB/s): min=12688, max=19568, per=99.82%, avg=17686.00, stdev=3336.34, samples=4 00:32:40.520 iops : min= 3172, max= 4892, avg=4421.50, stdev=834.09, samples=4 00:32:40.520 write: IOPS=4433, BW=17.3MiB/s (18.2MB/s)(34.8MiB/2011msec); 0 zone resets 00:32:40.520 slat (usec): min=3, max=122, avg= 4.17, stdev= 2.27 00:32:40.520 clat (usec): min=385, max=170107, avg=13038.24, stdev=12359.61 00:32:40.520 lat (usec): min=390, max=170118, avg=13042.41, stdev=12360.01 00:32:40.520 clat percentiles (msec): 00:32:40.520 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:32:40.520 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:32:40.520 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:32:40.520 | 99.00th=[ 18], 99.50th=[ 159], 99.90th=[ 
171], 99.95th=[ 171], 00:32:40.520 | 99.99th=[ 171] 00:32:40.520 bw ( KiB/s): min=13352, max=19264, per=99.85%, avg=17706.00, stdev=2903.76, samples=4 00:32:40.520 iops : min= 3338, max= 4816, avg=4426.50, stdev=725.94, samples=4 00:32:40.520 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:40.520 lat (msec) : 2=0.02%, 4=0.10%, 10=1.59%, 20=97.39%, 50=0.17% 00:32:40.520 lat (msec) : 250=0.72% 00:32:40.520 cpu : usr=61.54%, sys=37.06%, ctx=87, majf=0, minf=1544 00:32:40.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:40.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:40.520 issued rwts: total=8908,8915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:40.520 00:32:40.520 Run status group 0 (all jobs): 00:32:40.520 READ: bw=17.3MiB/s (18.1MB/s), 17.3MiB/s-17.3MiB/s (18.1MB/s-18.1MB/s), io=34.8MiB (36.5MB), run=2011-2011msec 00:32:40.520 WRITE: bw=17.3MiB/s (18.2MB/s), 17.3MiB/s-17.3MiB/s (18.2MB/s-18.2MB/s), io=34.8MiB (36.5MB), run=2011-2011msec 00:32:40.520 ----------------------------------------------------- 00:32:40.520 Suppressions used: 00:32:40.520 count bytes template 00:32:40.520 1 58 /usr/src/fio/parse.c 00:32:40.520 1 8 libtcmalloc_minimal.so 00:32:40.520 ----------------------------------------------------- 00:32:40.520 00:32:40.520 07:56:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:41.088 07:56:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:42.020 07:56:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # 
ls_nested_guid=b8404d79-b14c-4873-90e8-ebb6e7cb5a57 00:32:42.020 07:56:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb b8404d79-b14c-4873-90e8-ebb6e7cb5a57 00:32:42.020 07:56:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=b8404d79-b14c-4873-90e8-ebb6e7cb5a57 00:32:42.020 07:56:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:42.020 07:56:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:42.020 07:56:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:42.020 07:56:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:42.278 07:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:42.278 { 00:32:42.278 "uuid": "989f8a1c-1308-4f8e-b36e-19d82097181f", 00:32:42.278 "name": "lvs_0", 00:32:42.278 "base_bdev": "Nvme0n1", 00:32:42.278 "total_data_clusters": 930, 00:32:42.278 "free_clusters": 0, 00:32:42.278 "block_size": 512, 00:32:42.278 "cluster_size": 1073741824 00:32:42.278 }, 00:32:42.278 { 00:32:42.278 "uuid": "b8404d79-b14c-4873-90e8-ebb6e7cb5a57", 00:32:42.278 "name": "lvs_n_0", 00:32:42.278 "base_bdev": "77b540ae-5fe4-4d39-a857-8882531a6048", 00:32:42.278 "total_data_clusters": 237847, 00:32:42.278 "free_clusters": 237847, 00:32:42.278 "block_size": 512, 00:32:42.278 "cluster_size": 4194304 00:32:42.278 } 00:32:42.278 ]' 00:32:42.278 07:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b8404d79-b14c-4873-90e8-ebb6e7cb5a57") .free_clusters' 00:32:42.536 07:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:42.536 07:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="b8404d79-b14c-4873-90e8-ebb6e7cb5a57") .cluster_size' 00:32:42.536 07:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:42.536 07:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:42.536 07:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:42.536 951388 00:32:42.536 07:56:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:43.470 5759bb9f-ea00-4bb5-be8d-885f9bebdbff 00:32:43.727 07:56:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:43.985 07:56:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:44.244 07:56:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:44.502 07:56:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:32:44.760 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:44.760 fio-3.35 00:32:44.760 Starting 1 thread 00:32:47.288 00:32:47.288 test: (groupid=0, jobs=1): err= 0: pid=3087361: Tue Nov 19 07:56:39 2024 00:32:47.288 read: IOPS=4269, BW=16.7MiB/s (17.5MB/s)(33.5MiB/2011msec) 00:32:47.288 slat (usec): min=3, max=171, avg= 4.03, stdev= 3.25 00:32:47.288 clat (usec): min=5655, max=26978, avg=16259.46, stdev=1543.83 00:32:47.288 lat (usec): min=5694, max=26984, avg=16263.49, stdev=1543.73 00:32:47.288 clat percentiles (usec): 00:32:47.288 | 1.00th=[12649], 5.00th=[13960], 10.00th=[14484], 20.00th=[15008], 00:32:47.288 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16188], 60.00th=[16581], 00:32:47.288 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[18744], 00:32:47.288 | 99.00th=[19792], 99.50th=[20055], 99.90th=[23987], 99.95th=[25560], 00:32:47.288 | 99.99th=[26870] 00:32:47.288 bw ( KiB/s): min=16112, max=17480, per=99.71%, avg=17028.00, stdev=638.15, samples=4 00:32:47.288 iops : min= 4028, max= 4370, avg=4257.00, stdev=159.54, samples=4 00:32:47.288 write: IOPS=4276, BW=16.7MiB/s (17.5MB/s)(33.6MiB/2011msec); 0 zone resets 00:32:47.288 slat (usec): min=3, max=112, avg= 4.15, stdev= 2.09 00:32:47.288 clat (usec): min=3874, max=25485, avg=13466.00, stdev=1298.14 00:32:47.288 lat (usec): min=3881, max=25488, avg=13470.15, stdev=1298.08 00:32:47.288 clat percentiles (usec): 00:32:47.288 | 1.00th=[10683], 5.00th=[11600], 10.00th=[11994], 20.00th=[12518], 00:32:47.288 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:32:47.288 | 70.00th=[14091], 80.00th=[14484], 90.00th=[14877], 95.00th=[15401], 00:32:47.288 | 99.00th=[16319], 99.50th=[16712], 99.90th=[21890], 99.95th=[23725], 00:32:47.288 | 99.99th=[25560] 00:32:47.288 bw ( KiB/s): min=16840, max=17344, per=99.91%, avg=17090.00, stdev=212.30, samples=4 00:32:47.288 iops : min= 4210, max= 4336, avg=4272.50, stdev=53.08, 
samples=4 00:32:47.288 lat (msec) : 4=0.02%, 10=0.31%, 20=99.26%, 50=0.41% 00:32:47.288 cpu : usr=60.55%, sys=38.11%, ctx=80, majf=0, minf=1543 00:32:47.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:47.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:47.288 issued rwts: total=8586,8600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:47.288 00:32:47.288 Run status group 0 (all jobs): 00:32:47.288 READ: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=33.5MiB (35.2MB), run=2011-2011msec 00:32:47.288 WRITE: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=33.6MiB (35.2MB), run=2011-2011msec 00:32:47.545 ----------------------------------------------------- 00:32:47.545 Suppressions used: 00:32:47.545 count bytes template 00:32:47.545 1 58 /usr/src/fio/parse.c 00:32:47.545 1 8 libtcmalloc_minimal.so 00:32:47.545 ----------------------------------------------------- 00:32:47.545 00:32:47.545 07:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:47.803 07:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:47.804 07:56:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:51.987 07:56:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:52.245 07:56:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:55.587 07:56:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:55.587 07:56:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:57.503 rmmod nvme_tcp 00:32:57.503 rmmod nvme_fabrics 00:32:57.503 rmmod nvme_keyring 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3084185 ']' 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3084185 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3084185 ']' 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 3084185 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.503 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3084185 00:32:57.763 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.763 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:57.763 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3084185' 00:32:57.763 killing process with pid 3084185 00:32:57.763 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3084185 00:32:57.763 07:56:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3084185 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.141 07:56:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:01.051 00:33:01.051 real 0m42.620s 00:33:01.051 user 2m42.649s 00:33:01.051 sys 0m8.879s 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.051 ************************************ 00:33:01.051 END TEST nvmf_fio_host 00:33:01.051 ************************************ 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.051 ************************************ 00:33:01.051 START TEST nvmf_failover 00:33:01.051 ************************************ 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:01.051 * Looking for test storage... 
00:33:01.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:01.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.051 --rc genhtml_branch_coverage=1 00:33:01.051 --rc genhtml_function_coverage=1 00:33:01.051 --rc genhtml_legend=1 00:33:01.051 --rc geninfo_all_blocks=1 00:33:01.051 --rc geninfo_unexecuted_blocks=1 00:33:01.051 00:33:01.051 ' 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:33:01.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.051 --rc genhtml_branch_coverage=1 00:33:01.051 --rc genhtml_function_coverage=1 00:33:01.051 --rc genhtml_legend=1 00:33:01.051 --rc geninfo_all_blocks=1 00:33:01.051 --rc geninfo_unexecuted_blocks=1 00:33:01.051 00:33:01.051 ' 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:01.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.051 --rc genhtml_branch_coverage=1 00:33:01.051 --rc genhtml_function_coverage=1 00:33:01.051 --rc genhtml_legend=1 00:33:01.051 --rc geninfo_all_blocks=1 00:33:01.051 --rc geninfo_unexecuted_blocks=1 00:33:01.051 00:33:01.051 ' 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:01.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.051 --rc genhtml_branch_coverage=1 00:33:01.051 --rc genhtml_function_coverage=1 00:33:01.051 --rc genhtml_legend=1 00:33:01.051 --rc geninfo_all_blocks=1 00:33:01.051 --rc geninfo_unexecuted_blocks=1 00:33:01.051 00:33:01.051 ' 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.051 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:01.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.052 07:56:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.590 07:56:54 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:03.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.590 07:56:54 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:03.590 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:03.591 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:03.591 07:56:54 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:03.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:03.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:03.591 07:56:54 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:03.591 07:56:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:03.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:03.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:33:03.591 00:33:03.591 --- 10.0.0.2 ping statistics --- 00:33:03.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.591 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:03.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:03.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:33:03.591 00:33:03.591 --- 10.0.0.1 ping statistics --- 00:33:03.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.591 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3090872 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3090872 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3090872 ']' 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.591 07:56:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:03.591 [2024-11-19 07:56:55.250017] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:03.591 [2024-11-19 07:56:55.250153] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.591 [2024-11-19 07:56:55.395508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:03.851 [2024-11-19 07:56:55.533218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.851 [2024-11-19 07:56:55.533305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.851 [2024-11-19 07:56:55.533331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.851 [2024-11-19 07:56:55.533361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:03.851 [2024-11-19 07:56:55.533382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:03.851 [2024-11-19 07:56:55.536127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:03.851 [2024-11-19 07:56:55.536222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.851 [2024-11-19 07:56:55.536226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:04.418 07:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.418 07:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:04.418 07:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:04.418 07:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:04.418 07:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:04.418 07:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.418 07:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:04.677 [2024-11-19 07:56:56.488849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.677 07:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:04.936 Malloc0 00:33:04.936 07:56:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:05.503 07:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:05.503 07:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.762 [2024-11-19 07:56:57.660129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.762 07:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:06.020 [2024-11-19 07:56:57.924915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:06.020 07:56:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:06.279 [2024-11-19 07:56:58.201907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3091290 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3091290 /var/tmp/bdevperf.sock 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3091290 ']' 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:06.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.537 07:56:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:07.468 07:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.468 07:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:07.468 07:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:08.038 NVMe0n1 00:33:08.038 07:56:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:08.296 00:33:08.296 07:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3091447 00:33:08.296 07:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:08.296 07:57:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:33:09.231 07:57:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:09.490 [2024-11-19 07:57:01.294223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:09.490 07:57:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:12.780 07:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:12.780 00:33:12.780 07:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:13.038 [2024-11-19 07:57:04.960076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:13.038 [2024-11-19 07:57:04.960167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:13.038 [2024-11-19 07:57:04.960190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:13.038 [2024-11-19 07:57:04.960210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:13.038 [2024-11-19 07:57:04.960229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:13.038 [2024-11-19 07:57:04.960248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003880 is same with the state(6) to be set 00:33:13.038 [2024-11-19 07:57:04.960266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:13.298 07:57:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:16.590 07:57:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.590 [2024-11-19 07:57:08.249888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.590 07:57:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:17.522 07:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:17.779 [2024-11-19 07:57:09.553360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with 
the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.553982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554017] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 
[2024-11-19 07:57:09.554272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 [2024-11-19 07:57:09.554305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:17.779 07:57:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3091447 00:33:24.362 { 00:33:24.362 "results": [ 00:33:24.362 { 00:33:24.362 "job": "NVMe0n1", 00:33:24.362 "core_mask": "0x1", 00:33:24.362 "workload": "verify", 00:33:24.362 "status": "finished", 00:33:24.362 "verify_range": { 00:33:24.362 "start": 0, 00:33:24.362 "length": 16384 00:33:24.362 }, 00:33:24.362 "queue_depth": 128, 00:33:24.362 "io_size": 4096, 00:33:24.362 "runtime": 15.01793, 00:33:24.362 "iops": 6011.347768966828, 00:33:24.362 "mibps": 23.48182722252667, 00:33:24.362 "io_failed": 12293, 00:33:24.362 "io_timeout": 0, 00:33:24.362 "avg_latency_us": 18705.92860507464, 00:33:24.362 "min_latency_us": 788.8592592592593, 00:33:24.362 "max_latency_us": 21262.79111111111 00:33:24.362 } 00:33:24.362 ], 00:33:24.362 "core_count": 1 00:33:24.362 } 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3091290 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3091290 ']' 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3091290 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3091290 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:24.362 07:57:15 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3091290' 00:33:24.362 killing process with pid 3091290 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3091290 00:33:24.362 07:57:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3091290 00:33:24.362 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:24.362 [2024-11-19 07:56:58.305662] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:24.362 [2024-11-19 07:56:58.305832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091290 ] 00:33:24.362 [2024-11-19 07:56:58.447104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.362 [2024-11-19 07:56:58.572400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.362 Running I/O for 15 seconds... 
00:33:24.362 6037.00 IOPS, 23.58 MiB/s [2024-11-19T06:57:16.292Z] [2024-11-19 07:57:01.295238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 
[2024-11-19 07:57:01.295586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295874] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.295938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.295962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.296005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.296037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.296060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.296082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.296106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.296127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.296151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.296172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.296195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.296216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.362 [2024-11-19 07:57:01.296239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.362 [2024-11-19 07:57:01.296259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.296954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.296977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 
[2024-11-19 07:57:01.297040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 
07:57:01.297860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.297953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.297975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.298025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.363 [2024-11-19 07:57:01.298047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.298072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.363 [2024-11-19 07:57:01.298094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.298117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.363 [2024-11-19 07:57:01.298139] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.298162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.363 [2024-11-19 07:57:01.298184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.363 [2024-11-19 07:57:01.298207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298666] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.364 [2024-11-19 07:57:01.298823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.298964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.298993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:24.364 [2024-11-19 07:57:01.299287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.299973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.299997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.300050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.300076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.300098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.300123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:24.364 [2024-11-19 07:57:01.300145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.364 [2024-11-19 07:57:01.300168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.364 [2024-11-19 07:57:01.300190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:56872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.300817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.365 [2024-11-19 07:57:01.300864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.365 [2024-11-19 07:57:01.300916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.365 
[2024-11-19 07:57:01.300963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.300988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.365 [2024-11-19 07:57:01.301025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.365 [2024-11-19 07:57:01.301072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.365 [2024-11-19 07:57:01.301118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.365 [2024-11-19 07:57:01.301164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.301210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301233] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.301254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.301299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.301345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.301389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.301434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.301485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:01.301532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.365 [2024-11-19 07:57:01.301577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.301598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:33:24.365 [2024-11-19 07:57:01.301626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.365 [2024-11-19 07:57:01.301645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.365 [2024-11-19 07:57:01.301664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57504 len:8 PRP1 0x0 PRP2 0x0 00:33:24.365 [2024-11-19 07:57:01.301685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.302023] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:24.365 [2024-11-19 07:57:01.302105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.365 [2024-11-19 07:57:01.302134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:24.365 [2024-11-19 07:57:01.302185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.365 [2024-11-19 07:57:01.302207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.302228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.365 [2024-11-19 07:57:01.302249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.302270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.365 [2024-11-19 07:57:01.302291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:01.302311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:24.365 [2024-11-19 07:57:01.302391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:24.365 [2024-11-19 07:57:01.306316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:24.365 [2024-11-19 07:57:01.422382] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:33:24.365 5671.50 IOPS, 22.15 MiB/s [2024-11-19T06:57:16.295Z] 5846.67 IOPS, 22.84 MiB/s [2024-11-19T06:57:16.295Z] 5950.00 IOPS, 23.24 MiB/s [2024-11-19T06:57:16.295Z] [2024-11-19 07:57:04.962416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.365 [2024-11-19 07:57:04.962478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.365 [2024-11-19 07:57:04.962547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.366 [2024-11-19 07:57:04.962583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.962612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.366 [2024-11-19 07:57:04.962635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.962660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.366 [2024-11-19 07:57:04.962684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.962736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.366 [2024-11-19 07:57:04.962760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.962784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.366 [2024-11-19 07:57:04.962806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.962830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.366 [2024-11-19 07:57:04.962852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.962876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.366 [2024-11-19 07:57:04.962897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.962922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.962945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.962970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.962992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 
[2024-11-19 07:57:04.963064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963330] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:24.366 [2024-11-19 07:57:04.963613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963897] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.963968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.963993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.964015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.964041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.964063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.964088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.964110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.366 [2024-11-19 07:57:04.964135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.964157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:24.366 [2024-11-19 07:57:04.964182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.366 [2024-11-19 07:57:04.964205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:360 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.964969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.964993] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 
07:57:04.965530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.367 [2024-11-19 07:57:04.965578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.965964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.965986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.966011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.966033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 [2024-11-19 07:57:04.966057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.367 [2024-11-19 07:57:04.966079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.367 
[2024-11-19 07:57:04.966103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966365] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.966973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.966996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.967020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.967043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.967067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.967090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.967116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.368 [2024-11-19 07:57:04.967139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.967189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.368 [2024-11-19 07:57:04.967217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:760 len:8 PRP1 0x0 PRP2 0x0 00:33:24.368 
[2024-11-19 07:57:04.967239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.967267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.368 [2024-11-19 07:57:04.967286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.368 [2024-11-19 07:57:04.967305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:8 PRP1 0x0 PRP2 0x0 00:33:24.368 [2024-11-19 07:57:04.967327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.967348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.368 [2024-11-19 07:57:04.967365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.368 [2024-11-19 07:57:04.967383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:776 len:8 PRP1 0x0 PRP2 0x0 00:33:24.368 [2024-11-19 07:57:04.967403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.967422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.368 [2024-11-19 07:57:04.967439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.368 [2024-11-19 07:57:04.967458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:784 len:8 PRP1 0x0 PRP2 0x0 00:33:24.368 [2024-11-19 07:57:04.967477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.368 [2024-11-19 07:57:04.967497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:33:24.368 [2024-11-19 07:57:04.967514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.368 [2024-11-19 07:57:04.967538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:792 len:8 PRP1 0x0 PRP2 0x0 00:33:24.368 [2024-11-19 07:57:04.967559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical abort/manual-completion cycle repeated for lba:800 through lba:1000 (WRITE, len:8, qid:1 cid:0, all completed ABORTED - SQ DELETION (00/08)) ...]
00:33:24.369 [2024-11-19 07:57:04.969581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.369 [2024-11-19 07:57:04.969599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.369 [2024-11-19 07:57:04.969617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1008 len:8 PRP1 0x0 PRP2 0x0 00:33:24.369 [2024-11-19 07:57:04.969636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.369 [2024-11-19 07:57:04.969925] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:24.369 [2024-11-19 07:57:04.969984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.369 [2024-11-19 07:57:04.970011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.369 [2024-11-19 07:57:04.970035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.369 [2024-11-19 07:57:04.970056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.369 [2024-11-19 07:57:04.970077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.369 [2024-11-19 07:57:04.970097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.370 [2024-11-19 07:57:04.970128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.370 [2024-11-19 07:57:04.970150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.370 [2024-11-19 07:57:04.970169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:33:24.370 [2024-11-19 07:57:04.970265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:24.370 [2024-11-19 07:57:04.974098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:24.370 [2024-11-19 07:57:05.052446] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:33:24.370 5870.60 IOPS, 22.93 MiB/s [2024-11-19T06:57:16.300Z] 5938.50 IOPS, 23.20 MiB/s [2024-11-19T06:57:16.300Z] 5976.43 IOPS, 23.35 MiB/s [2024-11-19T06:57:16.300Z] 5989.88 IOPS, 23.40 MiB/s [2024-11-19T06:57:16.300Z] 6012.89 IOPS, 23.49 MiB/s [2024-11-19T06:57:16.300Z] [2024-11-19 07:57:09.556662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.370 [2024-11-19 07:57:09.556752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.370 [2024-11-19 07:57:09.556797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.370 [2024-11-19 07:57:09.556823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.370 [2024-11-19 07:57:09.556849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.370 [2024-11-19 07:57:09.556873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.370 [2024-11-19 07:57:09.556898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.370 
[2024-11-19 07:57:09.556921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pattern repeated for lba:113304 through lba:113776 (WRITE, len:8, qid:1, varying cid, SGL DATA BLOCK OFFSET 0x0 len:0x1000; includes one READ sqid:1 cid:104 lba:113208 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) — all completed ABORTED - SQ DELETION (00/08) ...]
00:33:24.371 [2024-11-19 07:57:09.560077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.371 [2024-11-19 07:57:09.560099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.371 [2024-11-19 07:57:09.560129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.371 [2024-11-19 07:57:09.560151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.371 [2024-11-19 07:57:09.560182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.371 [2024-11-19 07:57:09.560204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.371 [2024-11-19 07:57:09.560226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.371 [2024-11-19 07:57:09.560248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.371 [2024-11-19 07:57:09.560271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.371 [2024-11-19 07:57:09.560292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.560338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 
[2024-11-19 07:57:09.560383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.560438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.560483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.560531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.560577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.560630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.372 [2024-11-19 07:57:09.560698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.372 [2024-11-19 07:57:09.560749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.372 [2024-11-19 07:57:09.560795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.372 [2024-11-19 07:57:09.560841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.372 [2024-11-19 07:57:09.560887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.372 [2024-11-19 07:57:09.560933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.560957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.372 [2024-11-19 07:57:09.560990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:24.372 [2024-11-19 07:57:09.561245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.561956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.561980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.562023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.562060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.562084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.562117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 
[2024-11-19 07:57:09.562138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.562162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.562183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.562208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.562230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.562253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.372 [2024-11-19 07:57:09.562275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.372 [2024-11-19 07:57:09.562298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.373 [2024-11-19 07:57:09.562320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.562344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.373 [2024-11-19 07:57:09.562365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.562389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.373 [2024-11-19 07:57:09.562410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.562466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.562496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114104 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.562518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.562554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.562574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.562593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114112 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.562613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.562634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.562651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.562701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114120 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.562725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.562747] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.562764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.562782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114128 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.562802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.562828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.562846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.562865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114136 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.562884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.562904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.562921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.562939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114144 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.562959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.562978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114152 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.563079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114160 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.563156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114168 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.563240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114176 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.563319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114184 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.563396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114192 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.563473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114200 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.563545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563561] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114208 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.563615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114216 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.563725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:24.373 [2024-11-19 07:57:09.563743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:24.373 [2024-11-19 07:57:09.563760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114224 len:8 PRP1 0x0 PRP2 0x0 00:33:24.373 [2024-11-19 07:57:09.563785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.564073] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:24.373 [2024-11-19 07:57:09.564145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.373 [2024-11-19 07:57:09.564180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.564203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.373 [2024-11-19 07:57:09.564223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.564245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.373 [2024-11-19 07:57:09.564265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.564287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:24.373 [2024-11-19 07:57:09.564307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:24.373 [2024-11-19 07:57:09.564326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:24.373 [2024-11-19 07:57:09.564412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:24.373 [2024-11-19 07:57:09.568244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:24.373 [2024-11-19 07:57:09.729408] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:33:24.373 5918.40 IOPS, 23.12 MiB/s [2024-11-19T06:57:16.303Z] 5941.45 IOPS, 23.21 MiB/s [2024-11-19T06:57:16.303Z] 5964.58 IOPS, 23.30 MiB/s [2024-11-19T06:57:16.303Z] 5980.00 IOPS, 23.36 MiB/s [2024-11-19T06:57:16.303Z] 5989.57 IOPS, 23.40 MiB/s [2024-11-19T06:57:16.303Z] 6010.47 IOPS, 23.48 MiB/s 00:33:24.373 Latency(us) 00:33:24.373 [2024-11-19T06:57:16.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.373 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:24.373 Verification LBA range: start 0x0 length 0x4000 00:33:24.373 NVMe0n1 : 15.02 6011.35 23.48 818.55 0.00 18705.93 788.86 21262.79 00:33:24.373 [2024-11-19T06:57:16.303Z] =================================================================================================================== 00:33:24.373 [2024-11-19T06:57:16.303Z] Total : 6011.35 23.48 818.55 0.00 18705.93 788.86 21262.79 00:33:24.373 Received shutdown signal, test time was about 15.000000 seconds 00:33:24.373 00:33:24.374 Latency(us) 00:33:24.374 [2024-11-19T06:57:16.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.374 [2024-11-19T06:57:16.304Z] =================================================================================================================== 00:33:24.374 [2024-11-19T06:57:16.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3093897 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
128 -o 4096 -w verify -t 1 -f 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3093897 /var/tmp/bdevperf.sock 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3093897 ']' 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:24.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.374 07:57:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:25.308 07:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.308 07:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:25.308 07:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:25.566 [2024-11-19 07:57:17.368084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:25.567 07:57:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:25.825 [2024-11-19 07:57:17.652989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:25.825 07:57:17 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:26.391 NVMe0n1 00:33:26.391 07:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:26.649 00:33:26.649 07:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:27.217 00:33:27.217 07:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:27.217 07:57:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:27.217 07:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:27.788 07:57:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:31.075 07:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:31.075 07:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:31.075 07:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3094688 00:33:31.075 07:57:22 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:31.075 07:57:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3094688 00:33:32.012 { 00:33:32.012 "results": [ 00:33:32.012 { 00:33:32.012 "job": "NVMe0n1", 00:33:32.012 "core_mask": "0x1", 00:33:32.012 "workload": "verify", 00:33:32.012 "status": "finished", 00:33:32.012 "verify_range": { 00:33:32.012 "start": 0, 00:33:32.012 "length": 16384 00:33:32.012 }, 00:33:32.012 "queue_depth": 128, 00:33:32.012 "io_size": 4096, 00:33:32.012 "runtime": 1.025888, 00:33:32.012 "iops": 6145.895068467513, 00:33:32.012 "mibps": 24.007402611201222, 00:33:32.012 "io_failed": 0, 00:33:32.012 "io_timeout": 0, 00:33:32.012 "avg_latency_us": 20714.230518518518, 00:33:32.012 "min_latency_us": 4344.794074074074, 00:33:32.012 "max_latency_us": 19806.435555555556 00:33:32.012 } 00:33:32.012 ], 00:33:32.012 "core_count": 1 00:33:32.012 } 00:33:32.012 07:57:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:32.012 [2024-11-19 07:57:16.146624] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:33:32.012 [2024-11-19 07:57:16.146794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093897 ] 00:33:32.012 [2024-11-19 07:57:16.288256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.012 [2024-11-19 07:57:16.415162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.012 [2024-11-19 07:57:19.393916] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:32.012 [2024-11-19 07:57:19.394065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.012 [2024-11-19 07:57:19.394104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.012 [2024-11-19 07:57:19.394134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.012 [2024-11-19 07:57:19.394155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.012 [2024-11-19 07:57:19.394232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.012 [2024-11-19 07:57:19.394257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.012 [2024-11-19 07:57:19.394279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:32.012 [2024-11-19 07:57:19.394300] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:32.012 [2024-11-19 07:57:19.394321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:32.012 [2024-11-19 07:57:19.394404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:32.012 [2024-11-19 07:57:19.394460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:33:32.012 [2024-11-19 07:57:19.486133] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:32.012 Running I/O for 1 seconds... 00:33:32.013 6067.00 IOPS, 23.70 MiB/s 00:33:32.013 Latency(us) 00:33:32.013 [2024-11-19T06:57:23.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.013 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:32.013 Verification LBA range: start 0x0 length 0x4000 00:33:32.013 NVMe0n1 : 1.03 6145.90 24.01 0.00 0.00 20714.23 4344.79 19806.44 00:33:32.013 [2024-11-19T06:57:23.943Z] =================================================================================================================== 00:33:32.013 [2024-11-19T06:57:23.943Z] Total : 6145.90 24.01 0.00 0.00 20714.23 4344.79 19806.44 00:33:32.013 07:57:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:32.013 07:57:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:32.271 07:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:32.529 07:57:24 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:32.529 07:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:32.787 07:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:33.046 07:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:36.398 07:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:36.398 07:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3093897 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3093897 ']' 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3093897 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3093897 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3093897' 00:33:36.398 killing 
process with pid 3093897 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3093897 00:33:36.398 07:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3093897 00:33:37.333 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:37.333 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.593 rmmod nvme_tcp 00:33:37.593 rmmod nvme_fabrics 00:33:37.593 rmmod nvme_keyring 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3090872 ']' 00:33:37.593 07:57:29 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3090872 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3090872 ']' 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3090872 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3090872 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3090872' 00:33:37.593 killing process with pid 3090872 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3090872 00:33:37.593 07:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3090872 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:38.975 07:57:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.975 07:57:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.886 07:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.886 00:33:40.886 real 0m39.925s 00:33:40.886 user 2m20.266s 00:33:40.886 sys 0m6.318s 00:33:40.886 07:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:40.886 07:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:40.886 ************************************ 00:33:40.886 END TEST nvmf_failover 00:33:40.886 ************************************ 00:33:40.886 07:57:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:40.887 07:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:40.887 07:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.887 07:57:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.887 ************************************ 00:33:40.887 START TEST nvmf_host_discovery 00:33:40.887 ************************************ 00:33:40.887 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:41.146 * Looking for test storage... 
00:33:41.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:41.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.146 --rc genhtml_branch_coverage=1 00:33:41.146 --rc genhtml_function_coverage=1 00:33:41.146 --rc 
genhtml_legend=1 00:33:41.146 --rc geninfo_all_blocks=1 00:33:41.146 --rc geninfo_unexecuted_blocks=1 00:33:41.146 00:33:41.146 ' 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:41.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.146 --rc genhtml_branch_coverage=1 00:33:41.146 --rc genhtml_function_coverage=1 00:33:41.146 --rc genhtml_legend=1 00:33:41.146 --rc geninfo_all_blocks=1 00:33:41.146 --rc geninfo_unexecuted_blocks=1 00:33:41.146 00:33:41.146 ' 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:41.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.146 --rc genhtml_branch_coverage=1 00:33:41.146 --rc genhtml_function_coverage=1 00:33:41.146 --rc genhtml_legend=1 00:33:41.146 --rc geninfo_all_blocks=1 00:33:41.146 --rc geninfo_unexecuted_blocks=1 00:33:41.146 00:33:41.146 ' 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:41.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.146 --rc genhtml_branch_coverage=1 00:33:41.146 --rc genhtml_function_coverage=1 00:33:41.146 --rc genhtml_legend=1 00:33:41.146 --rc geninfo_all_blocks=1 00:33:41.146 --rc geninfo_unexecuted_blocks=1 00:33:41.146 00:33:41.146 ' 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.146 07:57:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.146 07:57:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.146 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.147 07:57:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:41.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:41.147 07:57:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.051 
07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.051 07:57:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:43.051 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:43.051 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:43.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.051 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:43.052 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:43.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:33:43.052 00:33:43.052 --- 10.0.0.2 ping statistics --- 00:33:43.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.052 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:33:43.052 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:43.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:33:43.311 00:33:43.311 --- 10.0.0.1 ping statistics --- 00:33:43.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.311 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:33:43.311 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.311 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:43.311 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:43.311 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.311 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:43.311 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:43.311 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.311 
07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:43.311 07:57:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3097553 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3097553 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3097553 ']' 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:43.311 07:57:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.311 [2024-11-19 07:57:35.102576] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:43.311 [2024-11-19 07:57:35.102727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.311 [2024-11-19 07:57:35.244241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.569 [2024-11-19 07:57:35.362095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.569 [2024-11-19 07:57:35.362178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:43.569 [2024-11-19 07:57:35.362200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.569 [2024-11-19 07:57:35.362221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.569 [2024-11-19 07:57:35.362237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:43.569 [2024-11-19 07:57:35.363646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.504 [2024-11-19 07:57:36.125863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.504 [2024-11-19 07:57:36.134069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:44.504 07:57:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.504 null0 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.504 null1 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3097711 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3097711 /tmp/host.sock 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3097711 ']' 00:33:44.504 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:44.505 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.505 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:44.505 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:44.505 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.505 07:57:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.505 [2024-11-19 07:57:36.250055] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:33:44.505 [2024-11-19 07:57:36.250188] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097711 ] 00:33:44.505 [2024-11-19 07:57:36.391716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.764 [2024-11-19 07:57:36.527383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:45.699 
07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:45.699 07:57:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:45.699 
07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 [2024-11-19 07:57:37.610324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:45.699 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:45.959 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:45.960 07:57:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:46.529 [2024-11-19 07:57:38.377902] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:46.529 [2024-11-19 07:57:38.377979] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:46.529 [2024-11-19 07:57:38.378027] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:46.790 [2024-11-19 07:57:38.464303] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:46.790 [2024-11-19 07:57:38.646799] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:46.790 [2024-11-19 07:57:38.648582] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x6150001f2a00:1 started. 00:33:46.790 [2024-11-19 07:57:38.651170] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:46.790 [2024-11-19 07:57:38.651210] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:46.790 [2024-11-19 07:57:38.656594] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2a00 was disconnected and freed. delete nvme_qpair. 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:47.050 07:57:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.050 [2024-11-19 07:57:38.970858] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 
00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:47.050 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:47.051 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.051 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:47.051 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:47.051 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.051 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.051 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.051 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.051 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.051 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.051 [2024-11-19 07:57:38.977087] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:33:47.310 07:57:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.310 [2024-11-19 07:57:39.055939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:47.310 [2024-11-19 07:57:39.056652] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:47.310 [2024-11-19 07:57:39.056738] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.310 07:57:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 
'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:47.310 [2024-11-19 07:57:39.143861] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:47.310 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:47.311 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.311 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:47.311 07:57:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:47.569 [2024-11-19 07:57:39.453985] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:47.569 [2024-11-19 07:57:39.454126] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:47.569 [2024-11-19 07:57:39.454157] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:47.569 [2024-11-19 07:57:39.454173] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.508 
07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.508 [2024-11-19 07:57:40.268503] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:48.508 [2024-11-19 07:57:40.268567] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:48.508 [2024-11-19 07:57:40.271834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.508 [2024-11-19 07:57:40.271881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.508 [2024-11-19 07:57:40.271908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.508 [2024-11-19 07:57:40.271930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.508 [2024-11-19 07:57:40.271951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.508 [2024-11-19 07:57:40.271982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.508 [2024-11-19 07:57:40.272004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.508 [2024-11-19 07:57:40.272024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.508 [2024-11-19 07:57:40.272044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is 
same with the state(6) to be set 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:48.508 [2024-11-19 07:57:40.281826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.508 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.508 [2024-11-19 07:57:40.291865] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:33:48.508 [2024-11-19 07:57:40.291906] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:48.508 [2024-11-19 07:57:40.291925] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.508 [2024-11-19 07:57:40.291939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.508 [2024-11-19 07:57:40.292020] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:48.508 [2024-11-19 07:57:40.292276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.508 [2024-11-19 07:57:40.292316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.508 [2024-11-19 07:57:40.292342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.508 [2024-11-19 07:57:40.292382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.508 [2024-11-19 07:57:40.292415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.508 [2024-11-19 07:57:40.292438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.508 [2024-11-19 07:57:40.292470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.508 [2024-11-19 07:57:40.292490] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:48.508 [2024-11-19 07:57:40.292527] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:48.508 [2024-11-19 07:57:40.292541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:48.508 [2024-11-19 07:57:40.302085] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.508 [2024-11-19 07:57:40.302123] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:48.508 [2024-11-19 07:57:40.302141] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.509 [2024-11-19 07:57:40.302155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.509 [2024-11-19 07:57:40.302195] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:48.509 [2024-11-19 07:57:40.302419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-11-19 07:57:40.302459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.509 [2024-11-19 07:57:40.302500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.509 [2024-11-19 07:57:40.302533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.509 [2024-11-19 07:57:40.302564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.509 [2024-11-19 07:57:40.302585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.509 [2024-11-19 07:57:40.302622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.509 [2024-11-19 07:57:40.302640] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:48.509 [2024-11-19 07:57:40.302654] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:48.509 [2024-11-19 07:57:40.302704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:48.509 [2024-11-19 07:57:40.312249] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.509 [2024-11-19 07:57:40.312286] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:48.509 [2024-11-19 07:57:40.312313] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.509 [2024-11-19 07:57:40.312327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.509 [2024-11-19 07:57:40.312380] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:48.509 [2024-11-19 07:57:40.312599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-11-19 07:57:40.312664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.509 [2024-11-19 07:57:40.312707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.509 [2024-11-19 07:57:40.312760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.509 [2024-11-19 07:57:40.312792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.509 [2024-11-19 07:57:40.312813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.509 [2024-11-19 07:57:40.312832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.509 [2024-11-19 07:57:40.312851] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:48.509 [2024-11-19 07:57:40.312865] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:48.509 [2024-11-19 07:57:40.312878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:48.509 [2024-11-19 07:57:40.322425] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.509 [2024-11-19 07:57:40.322478] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:48.509 [2024-11-19 07:57:40.322497] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.509 [2024-11-19 07:57:40.322512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.509 [2024-11-19 07:57:40.322554] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:48.509 [2024-11-19 07:57:40.322797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-11-19 07:57:40.322835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.509 [2024-11-19 07:57:40.322859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.509 [2024-11-19 07:57:40.322892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.509 [2024-11-19 07:57:40.322923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.509 [2024-11-19 07:57:40.322944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.509 [2024-11-19 07:57:40.322963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.509 [2024-11-19 07:57:40.322982] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:48.509 [2024-11-19 07:57:40.322997] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:48.509 [2024-11-19 07:57:40.323010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:48.509 [2024-11-19 07:57:40.332594] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.509 [2024-11-19 07:57:40.332626] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:48.509 [2024-11-19 07:57:40.332641] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.509 [2024-11-19 07:57:40.332653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.509 [2024-11-19 07:57:40.332711] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:48.509 [2024-11-19 07:57:40.332850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.509 [2024-11-19 07:57:40.332887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.509 [2024-11-19 07:57:40.332923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.509 [2024-11-19 07:57:40.332956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.509 [2024-11-19 07:57:40.332992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.509 [2024-11-19 07:57:40.333015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.509 [2024-11-19 07:57:40.333035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.509 [2024-11-19 07:57:40.333060] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:48.509 [2024-11-19 07:57:40.333074] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:48.509 [2024-11-19 07:57:40.333105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:48.509 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.509 [2024-11-19 07:57:40.342763] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.509 [2024-11-19 07:57:40.342798] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:48.509 [2024-11-19 07:57:40.342814] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.509 [2024-11-19 07:57:40.342827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.509 [2024-11-19 07:57:40.342873] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:48.509 [2024-11-19 07:57:40.343023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-11-19 07:57:40.343060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.510 [2024-11-19 07:57:40.343083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.510 [2024-11-19 07:57:40.343115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.510 [2024-11-19 07:57:40.343153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.510 [2024-11-19 07:57:40.343173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.510 [2024-11-19 07:57:40.343191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.510 [2024-11-19 07:57:40.343225] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:48.510 [2024-11-19 07:57:40.343244] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:48.510 [2024-11-19 07:57:40.343257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.510 [2024-11-19 07:57:40.352914] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.510 [2024-11-19 07:57:40.352947] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:48.510 [2024-11-19 07:57:40.352964] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.510 [2024-11-19 07:57:40.352977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:48.510 [2024-11-19 07:57:40.353012] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:48.510 [2024-11-19 07:57:40.353187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-11-19 07:57:40.353228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.510 [2024-11-19 07:57:40.353271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.510 [2024-11-19 07:57:40.353316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.510 [2024-11-19 07:57:40.353348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.510 [2024-11-19 07:57:40.353369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.510 [2024-11-19 07:57:40.353389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.510 [2024-11-19 07:57:40.353407] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:48.510 [2024-11-19 07:57:40.353422] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:48.510 [2024-11-19 07:57:40.353434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:48.510 [2024-11-19 07:57:40.363058] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.510 [2024-11-19 07:57:40.363099] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:48.510 [2024-11-19 07:57:40.363116] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.510 [2024-11-19 07:57:40.363128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.510 [2024-11-19 07:57:40.363181] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:48.510 [2024-11-19 07:57:40.363327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-11-19 07:57:40.363365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.510 [2024-11-19 07:57:40.363389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.510 [2024-11-19 07:57:40.363441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.510 [2024-11-19 07:57:40.363493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.510 [2024-11-19 07:57:40.363522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.510 [2024-11-19 07:57:40.363544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.510 [2024-11-19 07:57:40.363564] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:48.510 [2024-11-19 07:57:40.363580] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:48.510 [2024-11-19 07:57:40.363594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:48.510 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.510 [2024-11-19 07:57:40.373232] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.510 [2024-11-19 07:57:40.373270] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:48.510 [2024-11-19 07:57:40.373288] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.510 [2024-11-19 07:57:40.373302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.510 [2024-11-19 07:57:40.373342] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:48.510 [2024-11-19 07:57:40.373504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.510 [2024-11-19 07:57:40.373558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.510 [2024-11-19 07:57:40.373600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.510 [2024-11-19 07:57:40.373637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.510 [2024-11-19 07:57:40.373701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.510 [2024-11-19 07:57:40.373748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.510 [2024-11-19 07:57:40.373768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.510 [2024-11-19 07:57:40.373786] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:48.510 [2024-11-19 07:57:40.373801] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:48.510 [2024-11-19 07:57:40.373818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:48.510 [2024-11-19 07:57:40.383382] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.511 [2024-11-19 07:57:40.383412] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:48.511 [2024-11-19 07:57:40.383427] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.511 [2024-11-19 07:57:40.383439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.511 [2024-11-19 07:57:40.383486] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:48.511 [2024-11-19 07:57:40.383630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-11-19 07:57:40.383664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.511 [2024-11-19 07:57:40.383714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.511 [2024-11-19 07:57:40.383748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.511 [2024-11-19 07:57:40.383794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.511 [2024-11-19 07:57:40.383819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.511 [2024-11-19 07:57:40.383839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed 
state. 00:33:48.511 [2024-11-19 07:57:40.383857] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:48.511 [2024-11-19 07:57:40.383871] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:48.511 [2024-11-19 07:57:40.383883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:48.511 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:33:48.511 07:57:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:48.511 [2024-11-19 07:57:40.393529] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:48.511 [2024-11-19 07:57:40.393577] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:48.511 [2024-11-19 07:57:40.393595] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:48.511 [2024-11-19 07:57:40.393609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:48.511 [2024-11-19 07:57:40.393666] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:48.511 [2024-11-19 07:57:40.393845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:48.511 [2024-11-19 07:57:40.393881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:33:48.511 [2024-11-19 07:57:40.393905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:33:48.511 [2024-11-19 07:57:40.393937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:33:48.511 [2024-11-19 07:57:40.394016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:48.511 [2024-11-19 07:57:40.394064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:48.511 [2024-11-19 07:57:40.394090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:48.511 [2024-11-19 07:57:40.394124] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:48.511 [2024-11-19 07:57:40.394139] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:48.511 [2024-11-19 07:57:40.394151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:48.511 [2024-11-19 07:57:40.395011] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:48.511 [2024-11-19 07:57:40.395080] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 
-- # expected_count=0 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:49.892 07:57:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:49.892 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.893 
07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:49.893 07:57:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.893 07:57:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:50.828 [2024-11-19 07:57:42.669517] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:50.828 [2024-11-19 07:57:42.669573] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:50.828 [2024-11-19 07:57:42.669622] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:51.087 [2024-11-19 07:57:42.797124] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:51.087 [2024-11-19 07:57:42.902249] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:51.087 [2024-11-19 07:57:42.903737] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x6150001f3e00:1 started. 00:33:51.087 [2024-11-19 07:57:42.906744] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:51.087 [2024-11-19 07:57:42.906796] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.087 request: 00:33:51.087 { 00:33:51.087 "name": "nvme", 00:33:51.087 "trtype": "tcp", 00:33:51.087 "traddr": "10.0.0.2", 00:33:51.087 "adrfam": "ipv4", 00:33:51.087 "trsvcid": "8009", 00:33:51.087 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:51.087 "wait_for_attach": true, 00:33:51.087 "method": "bdev_nvme_start_discovery", 00:33:51.087 "req_id": 1 00:33:51.087 } 00:33:51.087 Got JSON-RPC error response 00:33:51.087 response: 00:33:51.087 { 00:33:51.087 "code": -17, 00:33:51.087 "message": "File exists" 00:33:51.087 } 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:33:51.087 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.088 [2024-11-19 07:57:42.950157] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x6150001f3e00 was disconnected and freed. delete nvme_qpair. 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:51.088 07:57:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:51.088 07:57:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.088 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.088 request: 00:33:51.088 { 00:33:51.088 "name": "nvme_second", 00:33:51.088 "trtype": "tcp", 00:33:51.088 "traddr": "10.0.0.2", 00:33:51.088 "adrfam": "ipv4", 00:33:51.088 "trsvcid": "8009", 00:33:51.088 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:51.088 "wait_for_attach": true, 00:33:51.088 "method": "bdev_nvme_start_discovery", 00:33:51.088 "req_id": 1 00:33:51.088 } 00:33:51.088 Got JSON-RPC error response 00:33:51.088 response: 00:33:51.088 { 00:33:51.088 "code": -17, 00:33:51.088 "message": "File exists" 00:33:51.088 } 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.348 07:57:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.287 [2024-11-19 07:57:44.114603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:52.287 [2024-11-19 07:57:44.114721] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4080 with addr=10.0.0.2, port=8010 00:33:52.287 [2024-11-19 07:57:44.114800] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:52.287 [2024-11-19 07:57:44.114825] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:52.287 [2024-11-19 07:57:44.114847] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:53.225 [2024-11-19 07:57:45.117157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:53.225 [2024-11-19 07:57:45.117252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=8010 00:33:53.225 [2024-11-19 07:57:45.117333] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:53.225 [2024-11-19 07:57:45.117358] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:53.225 [2024-11-19 07:57:45.117380] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:54.605 [2024-11-19 07:57:46.119128] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:54.605 request: 00:33:54.605 { 00:33:54.605 "name": "nvme_second", 00:33:54.605 "trtype": "tcp", 00:33:54.605 "traddr": "10.0.0.2", 00:33:54.605 "adrfam": "ipv4", 00:33:54.605 "trsvcid": "8010", 00:33:54.605 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:54.605 "wait_for_attach": false, 00:33:54.605 "attach_timeout_ms": 3000, 00:33:54.605 "method": "bdev_nvme_start_discovery", 00:33:54.605 "req_id": 1 00:33:54.605 } 00:33:54.605 Got JSON-RPC error response 00:33:54.605 response: 00:33:54.605 { 00:33:54.605 "code": -110, 00:33:54.605 "message": "Connection timed out" 00:33:54.605 } 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3097711 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 
-- # sync 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:54.605 rmmod nvme_tcp 00:33:54.605 rmmod nvme_fabrics 00:33:54.605 rmmod nvme_keyring 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3097553 ']' 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3097553 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3097553 ']' 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3097553 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3097553 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 3097553' 00:33:54.605 killing process with pid 3097553 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3097553 00:33:54.605 07:57:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3097553 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.544 07:57:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.083 00:33:58.083 real 0m16.679s 00:33:58.083 user 0m25.382s 00:33:58.083 sys 0m3.121s 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.083 07:57:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.083 ************************************ 00:33:58.083 END TEST nvmf_host_discovery 00:33:58.083 ************************************ 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.083 ************************************ 00:33:58.083 START TEST nvmf_host_multipath_status 00:33:58.083 ************************************ 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:58.083 * Looking for test storage... 
00:33:58.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:58.083 07:57:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.083 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.084 07:57:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:58.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.084 --rc genhtml_branch_coverage=1 00:33:58.084 --rc genhtml_function_coverage=1 00:33:58.084 --rc genhtml_legend=1 00:33:58.084 --rc geninfo_all_blocks=1 00:33:58.084 --rc geninfo_unexecuted_blocks=1 00:33:58.084 00:33:58.084 ' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:58.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.084 --rc genhtml_branch_coverage=1 00:33:58.084 --rc genhtml_function_coverage=1 00:33:58.084 --rc genhtml_legend=1 00:33:58.084 --rc geninfo_all_blocks=1 00:33:58.084 --rc geninfo_unexecuted_blocks=1 00:33:58.084 00:33:58.084 ' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:58.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.084 --rc genhtml_branch_coverage=1 00:33:58.084 --rc genhtml_function_coverage=1 00:33:58.084 --rc genhtml_legend=1 00:33:58.084 --rc geninfo_all_blocks=1 00:33:58.084 --rc geninfo_unexecuted_blocks=1 00:33:58.084 00:33:58.084 ' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:58.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.084 --rc genhtml_branch_coverage=1 00:33:58.084 --rc genhtml_function_coverage=1 00:33:58.084 --rc genhtml_legend=1 00:33:58.084 --rc geninfo_all_blocks=1 00:33:58.084 --rc geninfo_unexecuted_blocks=1 00:33:58.084 00:33:58.084 ' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:58.084 
07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:58.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.084 07:57:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.084 07:57:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:59.986 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.986 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:59.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:59.987 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.987 07:57:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:59.987 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.987 07:57:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:59.987 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.247 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.247 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:00.247 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:00.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:00.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:34:00.247 00:34:00.247 --- 10.0.0.2 ping statistics --- 00:34:00.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.247 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:34:00.247 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:00.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:34:00.247 00:34:00.247 --- 10.0.0.1 ping statistics --- 00:34:00.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.247 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:34:00.247 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.247 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:00.247 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:00.247 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.247 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3101134 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3101134 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3101134 ']' 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:00.248 07:57:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:00.248 [2024-11-19 07:57:52.057635] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:00.248 [2024-11-19 07:57:52.057799] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:00.508 [2024-11-19 07:57:52.199654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:00.508 [2024-11-19 07:57:52.331776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:00.508 [2024-11-19 07:57:52.331854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:00.508 [2024-11-19 07:57:52.331875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:00.508 [2024-11-19 07:57:52.331895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:00.508 [2024-11-19 07:57:52.331910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:00.508 [2024-11-19 07:57:52.334284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:00.508 [2024-11-19 07:57:52.334285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.448 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:01.448 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:01.448 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:01.448 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:01.448 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:01.448 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.448 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3101134 00:34:01.448 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:01.448 [2024-11-19 07:57:53.358295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.448 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:34:02.015 Malloc0 00:34:02.016 07:57:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:02.273 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:02.532 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.791 [2024-11-19 07:57:54.558261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.791 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:03.049 [2024-11-19 07:57:54.822996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:03.049 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3101433 00:34:03.049 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:03.049 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:03.049 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3101433 /var/tmp/bdevperf.sock 00:34:03.049 07:57:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3101433 ']' 00:34:03.049 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:03.049 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:03.049 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:03.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:03.050 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:03.050 07:57:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:03.985 07:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:03.985 07:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:03.985 07:57:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:04.243 07:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:04.812 Nvme0n1 00:34:04.812 07:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:05.069 Nvme0n1 00:34:05.069 07:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:05.069 07:57:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:07.599 07:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:07.599 07:57:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:07.599 07:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:07.859 07:57:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:08.798 07:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:08.798 07:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:08.798 07:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.798 07:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:09.056 07:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.056 07:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:09.056 07:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.056 07:58:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:09.340 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:09.340 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:09.340 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.340 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:09.622 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.622 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:09.622 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.622 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:09.880 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.880 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:09.880 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.880 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:10.139 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.139 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:10.139 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.139 07:58:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:10.397 07:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.397 07:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:10.397 07:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:10.655 07:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:10.913 07:58:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:12.289 07:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:12.289 07:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:12.289 07:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.289 07:58:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:12.289 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:12.289 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:12.289 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.289 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:12.547 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.547 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:12.547 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.547 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:12.805 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.805 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:12.805 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.805 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:13.062 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.063 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:13.063 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.063 07:58:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:13.321 07:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.321 07:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:13.321 07:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.321 07:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:13.579 07:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.579 07:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:13.579 07:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:13.837 07:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:14.095 07:58:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:15.476 07:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:15.476 07:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:15.476 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.476 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:15.476 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.476 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:15.476 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.476 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:15.734 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:15.734 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:15.734 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.734 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:15.993 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:15.993 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:15.993 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.993 07:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:16.251 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.251 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:16.252 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.252 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:16.510 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.510 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:16.510 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.510 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:16.768 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.768 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:16.768 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:17.027 07:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:17.286 07:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:18.661 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:18.661 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:18.661 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.661 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:18.661 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:18.661 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:18.661 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.661 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:18.919 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:18.919 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:18.919 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.919 07:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:19.178 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.178 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:19.178 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.178 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:19.437 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.437 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:19.437 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.437 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:19.696 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.696 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:19.696 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.696 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:19.955 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:19.955 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:19.955 07:58:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:20.524 07:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:20.524 07:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:21.905 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:21.905 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:21.905 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.905 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:21.905 07:58:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:21.905 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:21.905 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.905 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:22.164 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:22.164 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:22.164 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.164 07:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:22.422 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.422 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:22.422 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.422 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:22.680 
07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.680 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:22.680 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.680 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:22.938 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:22.938 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:22.938 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.939 07:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:23.197 07:58:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.197 07:58:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:23.197 07:58:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:23.455 07:58:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:23.714 07:58:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:25.091 07:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:25.091 07:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:25.091 07:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.091 07:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:25.091 07:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:25.091 07:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:25.091 07:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.091 07:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:25.349 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.349 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:25.349 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.349 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:25.607 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.607 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:25.607 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.607 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:25.866 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.866 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:25.866 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.866 07:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:26.124 07:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:26.124 07:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:26.124 07:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.124 07:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:26.382 07:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.382 07:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:26.639 07:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:26.639 07:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:26.897 07:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:27.465 07:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:28.402 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:28.403 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:28.403 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:28.403 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:28.661 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.661 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:28.661 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.661 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:28.920 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.920 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:28.920 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.920 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:29.178 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.178 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:29.178 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:29.178 07:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:29.437 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.437 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:29.437 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.437 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:29.695 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.695 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:29.695 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.695 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:29.953 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.954 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:29.954 07:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:30.212 07:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:30.472 07:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:31.847 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:31.847 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:31.847 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.847 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:31.847 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:31.847 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:31.847 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.847 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:32.105 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.106 07:58:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:32.106 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.106 07:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:32.364 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.364 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:32.364 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.364 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:32.623 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.623 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:32.623 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.623 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:32.881 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.881 
07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:32.881 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.881 07:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:33.139 07:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.139 07:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:33.139 07:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:33.398 07:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:33.965 07:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:34.903 07:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:34.903 07:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:34.903 07:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.903 07:58:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:35.160 07:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.161 07:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:35.161 07:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.161 07:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:35.419 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.419 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:35.419 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.419 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:35.677 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.677 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:35.677 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.678 07:58:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:35.936 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:35.936 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:35.936 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.936 07:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:36.194 07:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.194 07:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:36.194 07:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.194 07:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:36.451 07:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.452 07:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:36.452 07:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:36.709 07:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:36.967 07:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:37.901 07:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:37.901 07:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:37.901 07:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.901 07:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:38.467 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.467 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:38.467 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.467 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:38.726 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:38.726 07:58:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:38.726 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.726 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:38.985 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.985 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:38.985 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.985 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:39.270 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.270 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:39.270 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.270 07:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:39.552 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.552 
07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:39.552 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.552 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:39.811 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:39.811 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3101433 00:34:39.811 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3101433 ']' 00:34:39.811 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3101433 00:34:39.811 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:39.811 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.811 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3101433 00:34:39.812 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:39.812 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:39.812 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3101433' 00:34:39.812 killing process with pid 3101433 00:34:39.812 07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3101433 00:34:39.812 
07:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3101433 00:34:39.812 { 00:34:39.812 "results": [ 00:34:39.812 { 00:34:39.812 "job": "Nvme0n1", 00:34:39.812 "core_mask": "0x4", 00:34:39.812 "workload": "verify", 00:34:39.812 "status": "terminated", 00:34:39.812 "verify_range": { 00:34:39.812 "start": 0, 00:34:39.812 "length": 16384 00:34:39.812 }, 00:34:39.812 "queue_depth": 128, 00:34:39.812 "io_size": 4096, 00:34:39.812 "runtime": 34.3772, 00:34:39.812 "iops": 5823.394575474443, 00:34:39.812 "mibps": 22.747635060447042, 00:34:39.812 "io_failed": 0, 00:34:39.812 "io_timeout": 0, 00:34:39.812 "avg_latency_us": 21943.740956711186, 00:34:39.812 "min_latency_us": 2852.0296296296297, 00:34:39.812 "max_latency_us": 4051386.974814815 00:34:39.812 } 00:34:39.812 ], 00:34:39.812 "core_count": 1 00:34:39.812 } 00:34:40.755 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3101433 00:34:40.755 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:40.755 [2024-11-19 07:57:54.917593] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:40.755 [2024-11-19 07:57:54.917770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101433 ] 00:34:40.755 [2024-11-19 07:57:55.058243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.755 [2024-11-19 07:57:55.180958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:40.755 Running I/O for 90 seconds... 
00:34:40.755 6125.00 IOPS, 23.93 MiB/s [2024-11-19T06:58:32.685Z] 6161.00 IOPS, 24.07 MiB/s [2024-11-19T06:58:32.685Z] 6123.00 IOPS, 23.92 MiB/s [2024-11-19T06:58:32.685Z] 6082.75 IOPS, 23.76 MiB/s [2024-11-19T06:58:32.685Z] 6088.80 IOPS, 23.78 MiB/s [2024-11-19T06:58:32.685Z] 6103.17 IOPS, 23.84 MiB/s [2024-11-19T06:58:32.685Z] 6134.00 IOPS, 23.96 MiB/s [2024-11-19T06:58:32.685Z] 6139.62 IOPS, 23.98 MiB/s [2024-11-19T06:58:32.685Z] 6147.00 IOPS, 24.01 MiB/s [2024-11-19T06:58:32.685Z] 6153.10 IOPS, 24.04 MiB/s [2024-11-19T06:58:32.685Z] 6146.64 IOPS, 24.01 MiB/s [2024-11-19T06:58:32.685Z] 6147.92 IOPS, 24.02 MiB/s [2024-11-19T06:58:32.685Z] 6148.08 IOPS, 24.02 MiB/s [2024-11-19T06:58:32.685Z] 6144.00 IOPS, 24.00 MiB/s [2024-11-19T06:58:32.685Z] 6141.33 IOPS, 23.99 MiB/s [2024-11-19T06:58:32.685Z] [2024-11-19 07:58:12.146410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.755 [2024-11-19 07:58:12.146490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.146605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.755 [2024-11-19 07:58:12.146636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.146702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.755 [2024-11-19 07:58:12.146758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.146799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 
nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.755 [2024-11-19 07:58:12.146827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.146864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.755 [2024-11-19 07:58:12.146890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.146927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.755 [2024-11-19 07:58:12.146954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.755 [2024-11-19 07:58:12.147029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.755 [2024-11-19 07:58:12.147093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81904 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 
cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.147947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.147972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.148014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.148044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.148082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.755 [2024-11-19 07:58:12.148108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:40.755 [2024-11-19 07:58:12.148146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:34:40.756 [2024-11-19 07:58:12.148642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.148947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.148973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.149515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:40.756 [2024-11-19 07:58:12.149549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.149609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.149639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.149682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.149719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.149768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.149810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.149852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.149877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:40.756 [2024-11-19 07:58:12.149917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.756 [2024-11-19 07:58:12.149957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:34:40.756 [2024-11-19 07:58:12.150024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.150961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.150999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.151055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.756 [2024-11-19 07:58:12.151082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:40.756 [2024-11-19 07:58:12.151121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.151192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.151257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.151337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.151420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.151486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.151550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.151634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.151890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.151964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.151991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.152957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.152995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.153050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.153077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.153117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.153147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.153188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.153214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.153253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.153279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.153318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.153344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:34:40.757 [2024-11-19 07:58:12.153383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.757 [2024-11-19 07:58:12.153408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.153447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.758 [2024-11-19 07:58:12.153472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.153511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.758 [2024-11-19 07:58:12.153537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.153576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.758 [2024-11-19 07:58:12.153603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.153641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.758 [2024-11-19 07:58:12.153667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.153739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.758 [2024-11-19 07:58:12.153768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.153810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.758 [2024-11-19 07:58:12.153837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.153877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.758 [2024-11-19 07:58:12.153904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.153945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.758 [2024-11-19 07:58:12.153987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.758 [2024-11-19 07:58:12.154253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.154330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.154402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.154474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.154561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.154632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.154741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.154815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.154887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.154931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.154958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.758 [2024-11-19 07:58:12.155868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:40.758 [2024-11-19 07:58:12.155913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:12.155939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:12.155995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:12.156037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:12.156088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:12.156114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:12.156158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:12.156184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:12.156226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:12.156253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:12.156296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:12.156322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:12.156366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:12.156392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:34:40.759 5759.50 IOPS, 22.50 MiB/s [2024-11-19T06:58:32.689Z] 5420.71 IOPS, 21.17 MiB/s [2024-11-19T06:58:32.689Z] 5119.56 IOPS, 20.00 MiB/s [2024-11-19T06:58:32.689Z] 4850.11 IOPS, 18.95 MiB/s [2024-11-19T06:58:32.689Z] 4917.45 IOPS, 19.21 MiB/s [2024-11-19T06:58:32.689Z] 4972.05 IOPS, 19.42 MiB/s [2024-11-19T06:58:32.689Z] 5057.14 IOPS, 19.75 MiB/s [2024-11-19T06:58:32.689Z] 5218.61 IOPS, 20.39 MiB/s [2024-11-19T06:58:32.689Z] 5360.17 IOPS, 20.94 MiB/s [2024-11-19T06:58:32.689Z] 5491.28 IOPS, 21.45 MiB/s [2024-11-19T06:58:32.689Z] 5518.19 IOPS, 21.56 MiB/s [2024-11-19T06:58:32.689Z] 5546.85 IOPS, 21.67 MiB/s [2024-11-19T06:58:32.689Z] 5569.43 IOPS, 21.76 MiB/s [2024-11-19T06:58:32.689Z] 5629.69 IOPS, 21.99 MiB/s [2024-11-19T06:58:32.689Z] 5725.27 IOPS, 22.36 MiB/s [2024-11-19T06:58:32.689Z] 5822.00 IOPS, 22.74 MiB/s [2024-11-19T06:58:32.689Z]
[2024-11-19 07:58:28.813721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.813828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.813890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.813919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.813959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.813986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:40.759 [2024-11-19 07:58:28.814733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.759 [2024-11-19 07:58:28.814799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:34:40.759 [2024-11-19 07:58:28.814838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:40.759 [2024-11-19 07:58:28.814864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031
p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.814902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.759 [2024-11-19 07:58:28.814928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.814966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.759 [2024-11-19 07:58:28.814992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.815048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.759 [2024-11-19 07:58:28.815074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.815116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.759 [2024-11-19 07:58:28.815141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.815178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.759 [2024-11-19 07:58:28.815203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.815239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:40.759 [2024-11-19 07:58:28.815263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.815298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.759 [2024-11-19 07:58:28.815323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.815359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.759 [2024-11-19 07:58:28.815384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.815420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.759 [2024-11-19 07:58:28.815444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.815480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.759 [2024-11-19 07:58:28.815504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:40.759 [2024-11-19 07:58:28.815540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.815564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:40.760 
[2024-11-19 07:58:28.815601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.815625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.815660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.815711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.815768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.815795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.815834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.815860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.815898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.815929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.815969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 
07:58:28.816011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.818190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.818266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.818334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.818401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.818483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 
07:58:28.818536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.818564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.818647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.818723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.818792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.818858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.818911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.818948] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.819004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.819030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.819068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.819093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.819129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.760 [2024-11-19 07:58:28.819155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.819191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.819215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.819252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.760 [2024-11-19 07:58:28.819277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:40.760 [2024-11-19 07:58:28.819314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.819339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.819374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.819399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.819449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.819476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.819551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.819579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.819617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.819644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.819683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.819719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.819758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.819785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.819829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.819857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.819896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.819922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.819960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.819987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.820042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.820067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.820104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.820129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.820962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.821542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.821604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.821665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.821918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.821961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.822001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.822028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.822066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.822092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.822131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.822158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.823611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.823645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.823712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.761 [2024-11-19 07:58:28.823762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.823804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.761 [2024-11-19 07:58:28.823831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:40.761 [2024-11-19 07:58:28.823869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.823895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.823933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.823960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.823997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.824045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.824124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.824186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.824246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.824322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.824389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.824455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.824521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.824593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.824697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.824780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.824844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.824907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.824942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.824968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.825287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.825347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.825406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.825655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.762 [2024-11-19 07:58:28.825864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.825926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.825963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.762 [2024-11-19 07:58:28.825988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:40.762 [2024-11-19 07:58:28.826041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.826067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.826102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.826126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.826177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.826203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.826243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.826268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.826304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.826328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.826363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.826388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.826425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.826449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.828988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.829024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.829098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.829195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.829273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.829333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.829392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.829471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.829532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.829602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.829680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.829782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.829849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.829916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.829954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.829980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.830047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.830111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.830178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.830273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.830337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.830399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.830481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.763 [2024-11-19 07:58:28.830541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.763 [2024-11-19 07:58:28.830601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:40.763 [2024-11-19 07:58:28.830637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.830677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.830740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.830768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.830808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.830835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.830873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.830900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.830938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.830966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.831003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.831030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.831070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.831097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.832470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.832519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.832564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.832593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.832630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.832661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.832728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.832756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.832794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.832822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.832861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.832888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.832926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.832952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.832991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.833018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.833056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.833083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.833122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.833149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.833187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.833214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.833253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.833280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.833330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.833357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.833396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.833423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.835140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.835219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.835303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.835370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.835436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.835501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.835565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.764 [2024-11-19 07:58:28.835631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.835704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.764 [2024-11-19 07:58:28.835781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:40.764 [2024-11-19 07:58:28.835820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.835846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.835884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.835911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.835948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.835989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.836091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.836151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.836243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.836332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.836411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.836477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.836541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.836606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.836671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.836756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.836820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.836884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.836922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.836953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.837020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.837046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.837098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.837138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.837173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.837197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.837231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.837255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.837289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.837330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.841041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.841115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.841181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.841263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.841342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.841428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.841515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.841582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.765 [2024-11-19 07:58:28.841656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.765 [2024-11-19 07:58:28.841735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:40.765 [2024-11-19 07:58:28.841774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.841801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.841840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.841867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.841906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.841933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.841971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.841998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.842065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.842128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.842193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.842277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.842341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.842411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.842503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.842575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.842671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.842758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.842823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.842888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.842927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.842953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.843001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.843028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.843081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.843107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.843143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.843169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.843223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.843248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.845337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.845410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.766 [2024-11-19 07:58:28.845476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.845543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.845624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.845695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.845778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.845842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.845904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.845942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.845992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.846031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.766 [2024-11-19 07:58:28.846056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:40.766 [2024-11-19 07:58:28.846091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.846125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.846192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.846254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.846315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.846383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:28.846443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:28.846520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:28.846580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:28.846641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.846732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.846796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.846864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:28.846925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.846962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:28.847006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.847043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.847068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.847103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.847127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.847161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.847186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.847220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.847244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.847278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:28.847303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:28.847337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:28.847362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.265125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:29.265168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.265206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:29.265232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.265269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:29.265294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:40.767 5813.81 IOPS, 22.71 MiB/s [2024-11-19T06:58:32.697Z] [2024-11-19 07:58:29.269165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:29.269208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.269334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:29.269369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.269413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 
[2024-11-19 07:58:29.269457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.269502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:29.269529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.269568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:29.269596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.269634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:29.269661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.269725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.767 [2024-11-19 07:58:29.269763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.269819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:29.269862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 
07:58:29.269903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:29.269930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.269968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.767 [2024-11-19 07:58:29.270007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:40.767 [2024-11-19 07:58:29.270045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:40.768 [2024-11-19 07:58:29.270080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:40.768 [2024-11-19 07:58:29.270119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.768 [2024-11-19 07:58:29.270146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:40.768 [2024-11-19 07:58:29.270200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.768 [2024-11-19 07:58:29.270228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:40.768 [2024-11-19 07:58:29.270268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:40.768 [2024-11-19 07:58:29.270296] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:40.768 5825.91 IOPS, 22.76 MiB/s [2024-11-19T06:58:32.698Z] 5834.94 IOPS, 22.79 MiB/s [2024-11-19T06:58:32.698Z] Received shutdown signal, test time was about 34.378056 seconds 00:34:40.768 00:34:40.768 Latency(us) 00:34:40.768 [2024-11-19T06:58:32.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.768 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:40.768 Verification LBA range: start 0x0 length 0x4000 00:34:40.768 Nvme0n1 : 34.38 5823.39 22.75 0.00 0.00 21943.74 2852.03 4051386.97 00:34:40.768 [2024-11-19T06:58:32.698Z] =================================================================================================================== 00:34:40.768 [2024-11-19T06:58:32.698Z] Total : 5823.39 22.75 0.00 0.00 21943.74 2852.03 4051386.97 00:34:40.768 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:40.768 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:40.768 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:41.026 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@124 -- # set +e 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.027 rmmod nvme_tcp 00:34:41.027 rmmod nvme_fabrics 00:34:41.027 rmmod nvme_keyring 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3101134 ']' 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3101134 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3101134 ']' 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3101134 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3101134 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3101134' 00:34:41.027 killing process with pid 3101134 
00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3101134 00:34:41.027 07:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3101134 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:42.404 07:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.311 07:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:44.312 00:34:44.312 real 0m46.619s 00:34:44.312 user 2m18.163s 00:34:44.312 sys 0m11.086s 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:44.312 ************************************ 00:34:44.312 END TEST nvmf_host_multipath_status 00:34:44.312 ************************************ 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.312 ************************************ 00:34:44.312 START TEST nvmf_discovery_remove_ifc 00:34:44.312 ************************************ 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:44.312 * Looking for test storage... 
00:34:44.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:34:44.312 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:34:44.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.571 --rc genhtml_branch_coverage=1 00:34:44.571 --rc genhtml_function_coverage=1 00:34:44.571 --rc genhtml_legend=1 00:34:44.571 --rc geninfo_all_blocks=1 00:34:44.571 --rc geninfo_unexecuted_blocks=1 00:34:44.571 00:34:44.571 ' 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:44.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.571 --rc genhtml_branch_coverage=1 00:34:44.571 --rc genhtml_function_coverage=1 00:34:44.571 --rc genhtml_legend=1 00:34:44.571 --rc geninfo_all_blocks=1 00:34:44.571 --rc geninfo_unexecuted_blocks=1 00:34:44.571 00:34:44.571 ' 00:34:44.571 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:44.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.571 --rc genhtml_branch_coverage=1 00:34:44.571 --rc genhtml_function_coverage=1 00:34:44.571 --rc genhtml_legend=1 00:34:44.571 --rc geninfo_all_blocks=1 00:34:44.571 --rc geninfo_unexecuted_blocks=1 00:34:44.572 00:34:44.572 ' 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:44.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.572 --rc genhtml_branch_coverage=1 00:34:44.572 --rc genhtml_function_coverage=1 00:34:44.572 --rc genhtml_legend=1 00:34:44.572 --rc geninfo_all_blocks=1 00:34:44.572 --rc geninfo_unexecuted_blocks=1 00:34:44.572 00:34:44.572 ' 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:44.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:44.572 
07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:44.572 07:58:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:46.477 07:58:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:46.477 07:58:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:46.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:46.477 07:58:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:46.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:46.477 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:46.477 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:46.477 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:46.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:46.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:34:46.478 00:34:46.478 --- 10.0.0.2 ping statistics --- 00:34:46.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.478 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:46.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:46.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:34:46.478 00:34:46.478 --- 10.0.0.1 ping statistics --- 00:34:46.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.478 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3108151 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3108151 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3108151 ']' 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:46.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.478 07:58:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:46.736 [2024-11-19 07:58:38.459755] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:34:46.736 [2024-11-19 07:58:38.459902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:46.736 [2024-11-19 07:58:38.614551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.995 [2024-11-19 07:58:38.749756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:46.995 [2024-11-19 07:58:38.749838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:46.995 [2024-11-19 07:58:38.749858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:46.995 [2024-11-19 07:58:38.749878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:46.995 [2024-11-19 07:58:38.749895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:46.995 [2024-11-19 07:58:38.751526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.562 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.562 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:47.562 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:47.562 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.562 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.562 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.562 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:47.562 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.562 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.562 [2024-11-19 07:58:39.489960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:47.820 [2024-11-19 07:58:39.498274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:47.820 null0 00:34:47.820 [2024-11-19 07:58:39.530152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3108302 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3108302 /tmp/host.sock 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3108302 ']' 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:47.820 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:47.820 07:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.820 [2024-11-19 07:58:39.640918] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:34:47.820 [2024-11-19 07:58:39.641083] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108302 ] 00:34:48.079 [2024-11-19 07:58:39.783296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.079 [2024-11-19 07:58:39.919889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.012 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.271 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.271 07:58:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:49.271 07:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.271 07:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.205 [2024-11-19 07:58:42.052860] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:50.205 [2024-11-19 07:58:42.052903] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:50.205 [2024-11-19 07:58:42.052940] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:50.464 [2024-11-19 07:58:42.139282] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:50.464 [2024-11-19 07:58:42.240477] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:50.464 [2024-11-19 07:58:42.242124] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6150001f2c80:1 started. 
00:34:50.464 [2024-11-19 07:58:42.244539] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:50.464 [2024-11-19 07:58:42.244629] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:50.464 [2024-11-19 07:58:42.244747] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:50.464 [2024-11-19 07:58:42.244785] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:50.464 [2024-11-19 07:58:42.244827] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:50.464 [2024-11-19 07:58:42.251089] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6150001f2c80 was disconnected and freed. delete nvme_qpair. 
00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:50.464 07:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:51.839 07:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:52.772 07:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:53.706 07:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:54.640 07:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.640 07:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.640 07:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.640 07:58:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.640 07:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.640 07:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.640 07:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.640 07:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.640 07:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:54.640 07:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:56.015 07:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:34:56.015 [2024-11-19 07:58:47.685907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:56.015 [2024-11-19 07:58:47.686018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.015 [2024-11-19 07:58:47.686049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.015 [2024-11-19 07:58:47.686076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.015 [2024-11-19 07:58:47.686096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.015 [2024-11-19 07:58:47.686116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.015 [2024-11-19 07:58:47.686135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.015 [2024-11-19 07:58:47.686156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.015 [2024-11-19 07:58:47.686177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.015 [2024-11-19 07:58:47.686198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:56.015 [2024-11-19 07:58:47.686225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.015 [2024-11-19 07:58:47.686246] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:56.015 [2024-11-19 07:58:47.695918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:56.015 [2024-11-19 07:58:47.705965] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:56.015 [2024-11-19 07:58:47.706015] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:56.015 [2024-11-19 07:58:47.706033] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:56.015 [2024-11-19 07:58:47.706047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:56.015 [2024-11-19 07:58:47.706117] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:56.949 [2024-11-19 07:58:48.744752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:56.949 [2024-11-19 07:58:48.744878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:34:56.949 [2024-11-19 07:58:48.744920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:34:56.949 [2024-11-19 07:58:48.744994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:34:56.949 [2024-11-19 07:58:48.745801] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:34:56.949 [2024-11-19 07:58:48.745862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:56.949 [2024-11-19 07:58:48.745892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:56.949 [2024-11-19 07:58:48.745915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:56.949 [2024-11-19 07:58:48.745937] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:56.949 [2024-11-19 07:58:48.745956] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:56.949 [2024-11-19 07:58:48.745978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:56.949 [2024-11-19 07:58:48.746016] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:56.949 [2024-11-19 07:58:48.746032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:56.949 07:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:57.928 [2024-11-19 07:58:49.748583] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:57.928 [2024-11-19 07:58:49.748656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:57.928 [2024-11-19 07:58:49.748701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:57.928 [2024-11-19 07:58:49.748739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:57.928 [2024-11-19 07:58:49.748760] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:57.928 [2024-11-19 07:58:49.748782] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:57.928 [2024-11-19 07:58:49.748801] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:57.928 [2024-11-19 07:58:49.748814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:57.928 [2024-11-19 07:58:49.748902] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:57.928 [2024-11-19 07:58:49.748987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.928 [2024-11-19 07:58:49.749020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.928 [2024-11-19 07:58:49.749064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.928 [2024-11-19 07:58:49.749085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.928 [2024-11-19 07:58:49.749109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:34:57.928 [2024-11-19 07:58:49.749130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.928 [2024-11-19 07:58:49.749152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.928 [2024-11-19 07:58:49.749174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.928 [2024-11-19 07:58:49.749195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:57.928 [2024-11-19 07:58:49.749218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.928 [2024-11-19 07:58:49.749239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:34:57.928 [2024-11-19 07:58:49.749338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:57.928 [2024-11-19 07:58:49.750313] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:57.928 [2024-11-19 07:58:49.750343] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:57.928 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.928 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.928 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.928 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:57.928 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.929 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:58.188 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:58.188 07:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:59.123 07:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:00.055 [2024-11-19 07:58:51.768862] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:00.055 [2024-11-19 07:58:51.768901] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:00.055 [2024-11-19 07:58:51.768943] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:00.055 [2024-11-19 07:58:51.895430] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.055 [2024-11-19 07:58:51.956390] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:00.055 [2024-11-19 07:58:51.957953] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6150001f3900:1 started. 
00:35:00.055 [2024-11-19 07:58:51.960305] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:00.055 [2024-11-19 07:58:51.960379] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:00.055 [2024-11-19 07:58:51.960460] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:00.055 [2024-11-19 07:58:51.960502] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:00.055 [2024-11-19 07:58:51.960527] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:00.055 07:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:00.314 [2024-11-19 07:58:52.007980] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6150001f3900 was disconnected and freed. delete nvme_qpair. 
00:35:01.248 07:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.248 07:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.248 07:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.248 07:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.248 07:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.248 07:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.248 07:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.248 07:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.248 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:01.248 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:01.248 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3108302 00:35:01.248 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3108302 ']' 00:35:01.249 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3108302 00:35:01.249 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:01.249 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:01.249 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3108302 
00:35:01.249 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:01.249 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:01.249 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3108302' 00:35:01.249 killing process with pid 3108302 00:35:01.249 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3108302 00:35:01.249 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3108302 00:35:02.184 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:02.184 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:02.184 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:02.184 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:02.184 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:02.184 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:02.184 07:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:02.184 rmmod nvme_tcp 00:35:02.184 rmmod nvme_fabrics 00:35:02.184 rmmod nvme_keyring 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3108151 ']' 00:35:02.184 
07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3108151 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3108151 ']' 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3108151 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3108151 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3108151' 00:35:02.184 killing process with pid 3108151 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3108151 00:35:02.184 07:58:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3108151 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:03.569 07:58:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.569 07:58:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.473 07:58:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:05.473 00:35:05.473 real 0m21.060s 00:35:05.473 user 0m31.140s 00:35:05.473 sys 0m3.193s 00:35:05.473 07:58:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.473 07:58:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.473 ************************************ 00:35:05.473 END TEST nvmf_discovery_remove_ifc 00:35:05.473 ************************************ 00:35:05.473 07:58:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:05.473 07:58:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:05.473 07:58:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.473 07:58:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.473 ************************************ 
00:35:05.473 START TEST nvmf_identify_kernel_target 00:35:05.474 ************************************ 00:35:05.474 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:05.474 * Looking for test storage... 00:35:05.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:05.474 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:05.474 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:05.474 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.733 07:58:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.733 --rc genhtml_branch_coverage=1 00:35:05.733 --rc genhtml_function_coverage=1 00:35:05.733 --rc genhtml_legend=1 00:35:05.733 --rc geninfo_all_blocks=1 00:35:05.733 --rc geninfo_unexecuted_blocks=1 00:35:05.733 00:35:05.733 ' 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.733 --rc genhtml_branch_coverage=1 00:35:05.733 --rc genhtml_function_coverage=1 00:35:05.733 --rc genhtml_legend=1 00:35:05.733 --rc geninfo_all_blocks=1 00:35:05.733 --rc geninfo_unexecuted_blocks=1 00:35:05.733 00:35:05.733 ' 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.733 --rc genhtml_branch_coverage=1 00:35:05.733 --rc genhtml_function_coverage=1 00:35:05.733 --rc genhtml_legend=1 00:35:05.733 --rc geninfo_all_blocks=1 00:35:05.733 --rc geninfo_unexecuted_blocks=1 00:35:05.733 00:35:05.733 ' 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:05.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.733 --rc genhtml_branch_coverage=1 00:35:05.733 --rc genhtml_function_coverage=1 00:35:05.733 --rc genhtml_legend=1 00:35:05.733 --rc geninfo_all_blocks=1 
00:35:05.733 --rc geninfo_unexecuted_blocks=1 00:35:05.733 00:35:05.733 ' 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.733 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:05.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:05.734 07:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:07.636 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:07.637 07:58:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:07.637 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.637 07:58:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:07.637 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.637 07:58:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:07.637 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:07.637 Found net devices under 0000:0a:00.1: cvl_0_1 
00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:07.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:07.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:35:07.637 00:35:07.637 --- 10.0.0.2 ping statistics --- 00:35:07.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.637 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:07.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:07.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:35:07.637 00:35:07.637 --- 10.0.0.1 ping statistics --- 00:35:07.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.637 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:07.637 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:07.637 
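Annotation: the `nvmf_tcp_init` sequence logged above moves the target NIC into its own network namespace so a single host can act as both initiator (10.0.0.1 on `cvl_0_1`) and target (10.0.0.2 on `cvl_0_0` inside `cvl_0_0_ns_spdk`), then opens TCP port 4420 and verifies reachability with the two pings. A dry-run sketch of the same sequence, using the interface names from this run — it prints the commands instead of executing them, since the real calls need root:

```shell
#!/usr/bin/env bash
# Echo (rather than run) the netns plumbing performed by nvmf_tcp_init:
# isolate the target NIC in a namespace and address both ends of the link.
setup_tcp_netns() {
    local run=echo                  # swap for 'sudo' to execute for real
    local tgt_if=$1 init_if=$2 ns=$3
    $run ip netns add "$ns"
    $run ip link set "$tgt_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$init_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $run ip link set "$init_if" up
    $run ip netns exec "$ns" ip link set "$tgt_if" up
    # Open the NVMe/TCP port toward the initiator-side interface
    $run iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT
}
```

Invoked here as `setup_tcp_netns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk`; the two interfaces are ports of the same physical E810 NIC, cabled back-to-back.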
07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:07.638 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:07.895 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:07.895 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:07.895 07:58:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:08.831 Waiting for block devices as requested 00:35:08.831 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:09.090 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:09.090 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:09.349 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:09.349 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:09.349 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:09.349 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:09.609 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:09.609 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:09.609 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:09.609 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:09.867 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:09.867 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:09.867 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:09.867 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:35:10.125 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:10.125 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:10.125 07:59:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:10.125 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:10.125 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:10.125 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:10.125 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:10.125 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:10.125 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:10.125 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:10.125 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:10.125 No valid GPT data, bailing 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:10.384 00:35:10.384 Discovery Log Number of Records 2, Generation counter 2 00:35:10.384 =====Discovery Log Entry 0====== 00:35:10.384 trtype: tcp 00:35:10.384 adrfam: ipv4 00:35:10.384 subtype: current discovery subsystem 
00:35:10.384 treq: not specified, sq flow control disable supported 00:35:10.384 portid: 1 00:35:10.384 trsvcid: 4420 00:35:10.384 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:10.384 traddr: 10.0.0.1 00:35:10.384 eflags: none 00:35:10.384 sectype: none 00:35:10.384 =====Discovery Log Entry 1====== 00:35:10.384 trtype: tcp 00:35:10.384 adrfam: ipv4 00:35:10.384 subtype: nvme subsystem 00:35:10.384 treq: not specified, sq flow control disable supported 00:35:10.384 portid: 1 00:35:10.384 trsvcid: 4420 00:35:10.384 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:10.384 traddr: 10.0.0.1 00:35:10.384 eflags: none 00:35:10.384 sectype: none 00:35:10.384 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:10.384 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:10.644 ===================================================== 00:35:10.644 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:10.644 ===================================================== 00:35:10.644 Controller Capabilities/Features 00:35:10.644 ================================ 00:35:10.644 Vendor ID: 0000 00:35:10.644 Subsystem Vendor ID: 0000 00:35:10.644 Serial Number: dd999cfd15a689469cfb 00:35:10.644 Model Number: Linux 00:35:10.644 Firmware Version: 6.8.9-20 00:35:10.644 Recommended Arb Burst: 0 00:35:10.644 IEEE OUI Identifier: 00 00 00 00:35:10.644 Multi-path I/O 00:35:10.644 May have multiple subsystem ports: No 00:35:10.644 May have multiple controllers: No 00:35:10.644 Associated with SR-IOV VF: No 00:35:10.644 Max Data Transfer Size: Unlimited 00:35:10.644 Max Number of Namespaces: 0 00:35:10.644 Max Number of I/O Queues: 1024 00:35:10.644 NVMe Specification Version (VS): 1.3 00:35:10.644 NVMe Specification Version (Identify): 1.3 00:35:10.644 Maximum Queue Entries: 1024 
00:35:10.644 Contiguous Queues Required: No 00:35:10.644 Arbitration Mechanisms Supported 00:35:10.644 Weighted Round Robin: Not Supported 00:35:10.644 Vendor Specific: Not Supported 00:35:10.644 Reset Timeout: 7500 ms 00:35:10.644 Doorbell Stride: 4 bytes 00:35:10.644 NVM Subsystem Reset: Not Supported 00:35:10.644 Command Sets Supported 00:35:10.644 NVM Command Set: Supported 00:35:10.644 Boot Partition: Not Supported 00:35:10.644 Memory Page Size Minimum: 4096 bytes 00:35:10.644 Memory Page Size Maximum: 4096 bytes 00:35:10.644 Persistent Memory Region: Not Supported 00:35:10.644 Optional Asynchronous Events Supported 00:35:10.644 Namespace Attribute Notices: Not Supported 00:35:10.644 Firmware Activation Notices: Not Supported 00:35:10.644 ANA Change Notices: Not Supported 00:35:10.644 PLE Aggregate Log Change Notices: Not Supported 00:35:10.644 LBA Status Info Alert Notices: Not Supported 00:35:10.644 EGE Aggregate Log Change Notices: Not Supported 00:35:10.644 Normal NVM Subsystem Shutdown event: Not Supported 00:35:10.644 Zone Descriptor Change Notices: Not Supported 00:35:10.644 Discovery Log Change Notices: Supported 00:35:10.644 Controller Attributes 00:35:10.644 128-bit Host Identifier: Not Supported 00:35:10.644 Non-Operational Permissive Mode: Not Supported 00:35:10.644 NVM Sets: Not Supported 00:35:10.644 Read Recovery Levels: Not Supported 00:35:10.644 Endurance Groups: Not Supported 00:35:10.644 Predictable Latency Mode: Not Supported 00:35:10.644 Traffic Based Keep ALive: Not Supported 00:35:10.644 Namespace Granularity: Not Supported 00:35:10.644 SQ Associations: Not Supported 00:35:10.644 UUID List: Not Supported 00:35:10.644 Multi-Domain Subsystem: Not Supported 00:35:10.644 Fixed Capacity Management: Not Supported 00:35:10.644 Variable Capacity Management: Not Supported 00:35:10.644 Delete Endurance Group: Not Supported 00:35:10.644 Delete NVM Set: Not Supported 00:35:10.644 Extended LBA Formats Supported: Not Supported 00:35:10.644 Flexible 
Data Placement Supported: Not Supported 00:35:10.644 00:35:10.644 Controller Memory Buffer Support 00:35:10.644 ================================ 00:35:10.644 Supported: No 00:35:10.644 00:35:10.644 Persistent Memory Region Support 00:35:10.644 ================================ 00:35:10.644 Supported: No 00:35:10.644 00:35:10.644 Admin Command Set Attributes 00:35:10.644 ============================ 00:35:10.644 Security Send/Receive: Not Supported 00:35:10.644 Format NVM: Not Supported 00:35:10.644 Firmware Activate/Download: Not Supported 00:35:10.644 Namespace Management: Not Supported 00:35:10.644 Device Self-Test: Not Supported 00:35:10.645 Directives: Not Supported 00:35:10.645 NVMe-MI: Not Supported 00:35:10.645 Virtualization Management: Not Supported 00:35:10.645 Doorbell Buffer Config: Not Supported 00:35:10.645 Get LBA Status Capability: Not Supported 00:35:10.645 Command & Feature Lockdown Capability: Not Supported 00:35:10.645 Abort Command Limit: 1 00:35:10.645 Async Event Request Limit: 1 00:35:10.645 Number of Firmware Slots: N/A 00:35:10.645 Firmware Slot 1 Read-Only: N/A 00:35:10.645 Firmware Activation Without Reset: N/A 00:35:10.645 Multiple Update Detection Support: N/A 00:35:10.645 Firmware Update Granularity: No Information Provided 00:35:10.645 Per-Namespace SMART Log: No 00:35:10.645 Asymmetric Namespace Access Log Page: Not Supported 00:35:10.645 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:10.645 Command Effects Log Page: Not Supported 00:35:10.645 Get Log Page Extended Data: Supported 00:35:10.645 Telemetry Log Pages: Not Supported 00:35:10.645 Persistent Event Log Pages: Not Supported 00:35:10.645 Supported Log Pages Log Page: May Support 00:35:10.645 Commands Supported & Effects Log Page: Not Supported 00:35:10.645 Feature Identifiers & Effects Log Page:May Support 00:35:10.645 NVMe-MI Commands & Effects Log Page: May Support 00:35:10.645 Data Area 4 for Telemetry Log: Not Supported 00:35:10.645 Error Log Page Entries 
Supported: 1 00:35:10.645 Keep Alive: Not Supported 00:35:10.645 00:35:10.645 NVM Command Set Attributes 00:35:10.645 ========================== 00:35:10.645 Submission Queue Entry Size 00:35:10.645 Max: 1 00:35:10.645 Min: 1 00:35:10.645 Completion Queue Entry Size 00:35:10.645 Max: 1 00:35:10.645 Min: 1 00:35:10.645 Number of Namespaces: 0 00:35:10.645 Compare Command: Not Supported 00:35:10.645 Write Uncorrectable Command: Not Supported 00:35:10.645 Dataset Management Command: Not Supported 00:35:10.645 Write Zeroes Command: Not Supported 00:35:10.645 Set Features Save Field: Not Supported 00:35:10.645 Reservations: Not Supported 00:35:10.645 Timestamp: Not Supported 00:35:10.645 Copy: Not Supported 00:35:10.645 Volatile Write Cache: Not Present 00:35:10.645 Atomic Write Unit (Normal): 1 00:35:10.645 Atomic Write Unit (PFail): 1 00:35:10.645 Atomic Compare & Write Unit: 1 00:35:10.645 Fused Compare & Write: Not Supported 00:35:10.645 Scatter-Gather List 00:35:10.645 SGL Command Set: Supported 00:35:10.645 SGL Keyed: Not Supported 00:35:10.645 SGL Bit Bucket Descriptor: Not Supported 00:35:10.645 SGL Metadata Pointer: Not Supported 00:35:10.645 Oversized SGL: Not Supported 00:35:10.645 SGL Metadata Address: Not Supported 00:35:10.645 SGL Offset: Supported 00:35:10.645 Transport SGL Data Block: Not Supported 00:35:10.645 Replay Protected Memory Block: Not Supported 00:35:10.645 00:35:10.645 Firmware Slot Information 00:35:10.645 ========================= 00:35:10.645 Active slot: 0 00:35:10.645 00:35:10.645 00:35:10.645 Error Log 00:35:10.645 ========= 00:35:10.645 00:35:10.645 Active Namespaces 00:35:10.645 ================= 00:35:10.645 Discovery Log Page 00:35:10.645 ================== 00:35:10.645 Generation Counter: 2 00:35:10.645 Number of Records: 2 00:35:10.645 Record Format: 0 00:35:10.645 00:35:10.645 Discovery Log Entry 0 00:35:10.645 ---------------------- 00:35:10.645 Transport Type: 3 (TCP) 00:35:10.645 Address Family: 1 (IPv4) 00:35:10.645 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:35:10.645 Entry Flags: 00:35:10.645 Duplicate Returned Information: 0 00:35:10.645 Explicit Persistent Connection Support for Discovery: 0 00:35:10.645 Transport Requirements: 00:35:10.645 Secure Channel: Not Specified 00:35:10.645 Port ID: 1 (0x0001) 00:35:10.645 Controller ID: 65535 (0xffff) 00:35:10.645 Admin Max SQ Size: 32 00:35:10.645 Transport Service Identifier: 4420 00:35:10.645 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:10.645 Transport Address: 10.0.0.1 00:35:10.645 Discovery Log Entry 1 00:35:10.645 ---------------------- 00:35:10.645 Transport Type: 3 (TCP) 00:35:10.645 Address Family: 1 (IPv4) 00:35:10.645 Subsystem Type: 2 (NVM Subsystem) 00:35:10.645 Entry Flags: 00:35:10.645 Duplicate Returned Information: 0 00:35:10.645 Explicit Persistent Connection Support for Discovery: 0 00:35:10.645 Transport Requirements: 00:35:10.645 Secure Channel: Not Specified 00:35:10.645 Port ID: 1 (0x0001) 00:35:10.645 Controller ID: 65535 (0xffff) 00:35:10.645 Admin Max SQ Size: 32 00:35:10.645 Transport Service Identifier: 4420 00:35:10.645 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:10.645 Transport Address: 10.0.0.1 00:35:10.645 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:10.645 get_feature(0x01) failed 00:35:10.645 get_feature(0x02) failed 00:35:10.645 get_feature(0x04) failed 00:35:10.645 ===================================================== 00:35:10.645 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:10.645 ===================================================== 00:35:10.645 Controller Capabilities/Features 00:35:10.645 ================================ 00:35:10.645 Vendor ID: 0000 00:35:10.645 Subsystem Vendor ID: 
0000 00:35:10.645 Serial Number: a23724f896a882a52a41 00:35:10.645 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:10.645 Firmware Version: 6.8.9-20 00:35:10.645 Recommended Arb Burst: 6 00:35:10.645 IEEE OUI Identifier: 00 00 00 00:35:10.645 Multi-path I/O 00:35:10.645 May have multiple subsystem ports: Yes 00:35:10.645 May have multiple controllers: Yes 00:35:10.645 Associated with SR-IOV VF: No 00:35:10.645 Max Data Transfer Size: Unlimited 00:35:10.645 Max Number of Namespaces: 1024 00:35:10.645 Max Number of I/O Queues: 128 00:35:10.645 NVMe Specification Version (VS): 1.3 00:35:10.645 NVMe Specification Version (Identify): 1.3 00:35:10.645 Maximum Queue Entries: 1024 00:35:10.645 Contiguous Queues Required: No 00:35:10.645 Arbitration Mechanisms Supported 00:35:10.645 Weighted Round Robin: Not Supported 00:35:10.645 Vendor Specific: Not Supported 00:35:10.645 Reset Timeout: 7500 ms 00:35:10.645 Doorbell Stride: 4 bytes 00:35:10.645 NVM Subsystem Reset: Not Supported 00:35:10.645 Command Sets Supported 00:35:10.645 NVM Command Set: Supported 00:35:10.645 Boot Partition: Not Supported 00:35:10.645 Memory Page Size Minimum: 4096 bytes 00:35:10.645 Memory Page Size Maximum: 4096 bytes 00:35:10.645 Persistent Memory Region: Not Supported 00:35:10.645 Optional Asynchronous Events Supported 00:35:10.645 Namespace Attribute Notices: Supported 00:35:10.645 Firmware Activation Notices: Not Supported 00:35:10.646 ANA Change Notices: Supported 00:35:10.646 PLE Aggregate Log Change Notices: Not Supported 00:35:10.646 LBA Status Info Alert Notices: Not Supported 00:35:10.646 EGE Aggregate Log Change Notices: Not Supported 00:35:10.646 Normal NVM Subsystem Shutdown event: Not Supported 00:35:10.646 Zone Descriptor Change Notices: Not Supported 00:35:10.646 Discovery Log Change Notices: Not Supported 00:35:10.646 Controller Attributes 00:35:10.646 128-bit Host Identifier: Supported 00:35:10.646 Non-Operational Permissive Mode: Not Supported 00:35:10.646 NVM Sets: Not 
Supported 00:35:10.646 Read Recovery Levels: Not Supported 00:35:10.646 Endurance Groups: Not Supported 00:35:10.646 Predictable Latency Mode: Not Supported 00:35:10.646 Traffic Based Keep ALive: Supported 00:35:10.646 Namespace Granularity: Not Supported 00:35:10.646 SQ Associations: Not Supported 00:35:10.646 UUID List: Not Supported 00:35:10.646 Multi-Domain Subsystem: Not Supported 00:35:10.646 Fixed Capacity Management: Not Supported 00:35:10.646 Variable Capacity Management: Not Supported 00:35:10.646 Delete Endurance Group: Not Supported 00:35:10.646 Delete NVM Set: Not Supported 00:35:10.646 Extended LBA Formats Supported: Not Supported 00:35:10.646 Flexible Data Placement Supported: Not Supported 00:35:10.646 00:35:10.646 Controller Memory Buffer Support 00:35:10.646 ================================ 00:35:10.646 Supported: No 00:35:10.646 00:35:10.646 Persistent Memory Region Support 00:35:10.646 ================================ 00:35:10.646 Supported: No 00:35:10.646 00:35:10.646 Admin Command Set Attributes 00:35:10.646 ============================ 00:35:10.646 Security Send/Receive: Not Supported 00:35:10.646 Format NVM: Not Supported 00:35:10.646 Firmware Activate/Download: Not Supported 00:35:10.646 Namespace Management: Not Supported 00:35:10.646 Device Self-Test: Not Supported 00:35:10.646 Directives: Not Supported 00:35:10.646 NVMe-MI: Not Supported 00:35:10.646 Virtualization Management: Not Supported 00:35:10.646 Doorbell Buffer Config: Not Supported 00:35:10.646 Get LBA Status Capability: Not Supported 00:35:10.646 Command & Feature Lockdown Capability: Not Supported 00:35:10.646 Abort Command Limit: 4 00:35:10.646 Async Event Request Limit: 4 00:35:10.646 Number of Firmware Slots: N/A 00:35:10.646 Firmware Slot 1 Read-Only: N/A 00:35:10.646 Firmware Activation Without Reset: N/A 00:35:10.646 Multiple Update Detection Support: N/A 00:35:10.646 Firmware Update Granularity: No Information Provided 00:35:10.646 Per-Namespace SMART Log: Yes 
00:35:10.646 Asymmetric Namespace Access Log Page: Supported 00:35:10.646 ANA Transition Time : 10 sec 00:35:10.646 00:35:10.646 Asymmetric Namespace Access Capabilities 00:35:10.646 ANA Optimized State : Supported 00:35:10.646 ANA Non-Optimized State : Supported 00:35:10.646 ANA Inaccessible State : Supported 00:35:10.646 ANA Persistent Loss State : Supported 00:35:10.646 ANA Change State : Supported 00:35:10.646 ANAGRPID is not changed : No 00:35:10.646 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:10.646 00:35:10.646 ANA Group Identifier Maximum : 128 00:35:10.646 Number of ANA Group Identifiers : 128 00:35:10.646 Max Number of Allowed Namespaces : 1024 00:35:10.646 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:10.646 Command Effects Log Page: Supported 00:35:10.646 Get Log Page Extended Data: Supported 00:35:10.646 Telemetry Log Pages: Not Supported 00:35:10.646 Persistent Event Log Pages: Not Supported 00:35:10.646 Supported Log Pages Log Page: May Support 00:35:10.646 Commands Supported & Effects Log Page: Not Supported 00:35:10.646 Feature Identifiers & Effects Log Page:May Support 00:35:10.646 NVMe-MI Commands & Effects Log Page: May Support 00:35:10.646 Data Area 4 for Telemetry Log: Not Supported 00:35:10.646 Error Log Page Entries Supported: 128 00:35:10.646 Keep Alive: Supported 00:35:10.646 Keep Alive Granularity: 1000 ms 00:35:10.646 00:35:10.646 NVM Command Set Attributes 00:35:10.646 ========================== 00:35:10.646 Submission Queue Entry Size 00:35:10.646 Max: 64 00:35:10.646 Min: 64 00:35:10.646 Completion Queue Entry Size 00:35:10.646 Max: 16 00:35:10.646 Min: 16 00:35:10.646 Number of Namespaces: 1024 00:35:10.646 Compare Command: Not Supported 00:35:10.646 Write Uncorrectable Command: Not Supported 00:35:10.646 Dataset Management Command: Supported 00:35:10.646 Write Zeroes Command: Supported 00:35:10.646 Set Features Save Field: Not Supported 00:35:10.646 Reservations: Not Supported 00:35:10.646 Timestamp: Not Supported 
00:35:10.646 Copy: Not Supported 00:35:10.646 Volatile Write Cache: Present 00:35:10.646 Atomic Write Unit (Normal): 1 00:35:10.646 Atomic Write Unit (PFail): 1 00:35:10.646 Atomic Compare & Write Unit: 1 00:35:10.646 Fused Compare & Write: Not Supported 00:35:10.646 Scatter-Gather List 00:35:10.646 SGL Command Set: Supported 00:35:10.646 SGL Keyed: Not Supported 00:35:10.646 SGL Bit Bucket Descriptor: Not Supported 00:35:10.646 SGL Metadata Pointer: Not Supported 00:35:10.646 Oversized SGL: Not Supported 00:35:10.646 SGL Metadata Address: Not Supported 00:35:10.646 SGL Offset: Supported 00:35:10.646 Transport SGL Data Block: Not Supported 00:35:10.646 Replay Protected Memory Block: Not Supported 00:35:10.646 00:35:10.646 Firmware Slot Information 00:35:10.646 ========================= 00:35:10.646 Active slot: 0 00:35:10.646 00:35:10.646 Asymmetric Namespace Access 00:35:10.646 =========================== 00:35:10.646 Change Count : 0 00:35:10.646 Number of ANA Group Descriptors : 1 00:35:10.646 ANA Group Descriptor : 0 00:35:10.646 ANA Group ID : 1 00:35:10.646 Number of NSID Values : 1 00:35:10.646 Change Count : 0 00:35:10.646 ANA State : 1 00:35:10.646 Namespace Identifier : 1 00:35:10.646 00:35:10.646 Commands Supported and Effects 00:35:10.646 ============================== 00:35:10.646 Admin Commands 00:35:10.646 -------------- 00:35:10.646 Get Log Page (02h): Supported 00:35:10.646 Identify (06h): Supported 00:35:10.646 Abort (08h): Supported 00:35:10.646 Set Features (09h): Supported 00:35:10.646 Get Features (0Ah): Supported 00:35:10.646 Asynchronous Event Request (0Ch): Supported 00:35:10.646 Keep Alive (18h): Supported 00:35:10.646 I/O Commands 00:35:10.646 ------------ 00:35:10.646 Flush (00h): Supported 00:35:10.646 Write (01h): Supported LBA-Change 00:35:10.646 Read (02h): Supported 00:35:10.646 Write Zeroes (08h): Supported LBA-Change 00:35:10.647 Dataset Management (09h): Supported 00:35:10.647 00:35:10.647 Error Log 00:35:10.647 ========= 
00:35:10.647 Entry: 0 00:35:10.647 Error Count: 0x3 00:35:10.647 Submission Queue Id: 0x0 00:35:10.647 Command Id: 0x5 00:35:10.647 Phase Bit: 0 00:35:10.647 Status Code: 0x2 00:35:10.647 Status Code Type: 0x0 00:35:10.647 Do Not Retry: 1 00:35:10.647 Error Location: 0x28 00:35:10.647 LBA: 0x0 00:35:10.647 Namespace: 0x0 00:35:10.647 Vendor Log Page: 0x0 00:35:10.647 ----------- 00:35:10.647 Entry: 1 00:35:10.647 Error Count: 0x2 00:35:10.647 Submission Queue Id: 0x0 00:35:10.647 Command Id: 0x5 00:35:10.647 Phase Bit: 0 00:35:10.647 Status Code: 0x2 00:35:10.647 Status Code Type: 0x0 00:35:10.647 Do Not Retry: 1 00:35:10.647 Error Location: 0x28 00:35:10.647 LBA: 0x0 00:35:10.647 Namespace: 0x0 00:35:10.647 Vendor Log Page: 0x0 00:35:10.647 ----------- 00:35:10.647 Entry: 2 00:35:10.647 Error Count: 0x1 00:35:10.647 Submission Queue Id: 0x0 00:35:10.647 Command Id: 0x4 00:35:10.647 Phase Bit: 0 00:35:10.647 Status Code: 0x2 00:35:10.647 Status Code Type: 0x0 00:35:10.647 Do Not Retry: 1 00:35:10.647 Error Location: 0x28 00:35:10.647 LBA: 0x0 00:35:10.647 Namespace: 0x0 00:35:10.647 Vendor Log Page: 0x0 00:35:10.647 00:35:10.647 Number of Queues 00:35:10.647 ================ 00:35:10.647 Number of I/O Submission Queues: 128 00:35:10.647 Number of I/O Completion Queues: 128 00:35:10.647 00:35:10.647 ZNS Specific Controller Data 00:35:10.647 ============================ 00:35:10.647 Zone Append Size Limit: 0 00:35:10.647 00:35:10.647 00:35:10.647 Active Namespaces 00:35:10.647 ================= 00:35:10.647 get_feature(0x05) failed 00:35:10.647 Namespace ID:1 00:35:10.647 Command Set Identifier: NVM (00h) 00:35:10.647 Deallocate: Supported 00:35:10.647 Deallocated/Unwritten Error: Not Supported 00:35:10.647 Deallocated Read Value: Unknown 00:35:10.647 Deallocate in Write Zeroes: Not Supported 00:35:10.647 Deallocated Guard Field: 0xFFFF 00:35:10.647 Flush: Supported 00:35:10.647 Reservation: Not Supported 00:35:10.647 Namespace Sharing Capabilities: Multiple 
Controllers 00:35:10.647 Size (in LBAs): 1953525168 (931GiB) 00:35:10.647 Capacity (in LBAs): 1953525168 (931GiB) 00:35:10.647 Utilization (in LBAs): 1953525168 (931GiB) 00:35:10.647 UUID: db98be88-f9fe-4360-8b9b-cee63f00e42d 00:35:10.647 Thin Provisioning: Not Supported 00:35:10.647 Per-NS Atomic Units: Yes 00:35:10.647 Atomic Boundary Size (Normal): 0 00:35:10.647 Atomic Boundary Size (PFail): 0 00:35:10.647 Atomic Boundary Offset: 0 00:35:10.647 NGUID/EUI64 Never Reused: No 00:35:10.647 ANA group ID: 1 00:35:10.647 Namespace Write Protected: No 00:35:10.647 Number of LBA Formats: 1 00:35:10.647 Current LBA Format: LBA Format #00 00:35:10.647 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:10.647 00:35:10.905 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:10.906 rmmod nvme_tcp 00:35:10.906 rmmod nvme_fabrics 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.906 07:59:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:12.808 07:59:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:12.808 07:59:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:14.244 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:14.244 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:14.244 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:14.244 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:14.244 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:14.244 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:14.244 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:14.244 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:14.244 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:14.244 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:14.244 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:14.244 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:14.244 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:14.244 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:14.244 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:14.244 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:35:15.178 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:15.178 00:35:15.178 real 0m9.733s 00:35:15.178 user 0m2.200s 00:35:15.178 sys 0m3.556s 00:35:15.178 07:59:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:15.178 07:59:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:15.178 ************************************ 00:35:15.178 END TEST nvmf_identify_kernel_target 00:35:15.178 ************************************ 00:35:15.178 07:59:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:15.178 07:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:15.178 07:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:15.178 07:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.178 ************************************ 00:35:15.178 START TEST nvmf_auth_host 00:35:15.178 ************************************ 00:35:15.178 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:15.439 * Looking for test storage... 
00:35:15.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:15.439 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:15.439 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:35:15.439 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:15.439 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:15.439 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.439 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.439 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.439 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:15.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.440 --rc genhtml_branch_coverage=1 00:35:15.440 --rc genhtml_function_coverage=1 00:35:15.440 --rc genhtml_legend=1 00:35:15.440 --rc geninfo_all_blocks=1 00:35:15.440 --rc geninfo_unexecuted_blocks=1 00:35:15.440 00:35:15.440 ' 00:35:15.440 07:59:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:15.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.440 --rc genhtml_branch_coverage=1 00:35:15.440 --rc genhtml_function_coverage=1 00:35:15.440 --rc genhtml_legend=1 00:35:15.440 --rc geninfo_all_blocks=1 00:35:15.440 --rc geninfo_unexecuted_blocks=1 00:35:15.440 00:35:15.440 ' 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:15.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.440 --rc genhtml_branch_coverage=1 00:35:15.440 --rc genhtml_function_coverage=1 00:35:15.440 --rc genhtml_legend=1 00:35:15.440 --rc geninfo_all_blocks=1 00:35:15.440 --rc geninfo_unexecuted_blocks=1 00:35:15.440 00:35:15.440 ' 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:15.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.440 --rc genhtml_branch_coverage=1 00:35:15.440 --rc genhtml_function_coverage=1 00:35:15.440 --rc genhtml_legend=1 00:35:15.440 --rc geninfo_all_blocks=1 00:35:15.440 --rc geninfo_unexecuted_blocks=1 00:35:15.440 00:35:15.440 ' 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.440 07:59:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.440 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:15.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:15.441 07:59:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:15.441 07:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:17.344 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:17.344 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:17.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.344 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:17.345 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:17.345 07:59:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:17.345 07:59:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:17.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:17.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:35:17.345 00:35:17.345 --- 10.0.0.2 ping statistics --- 00:35:17.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.345 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:17.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:17.345
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:35:17.345
00:35:17.345
--- 10.0.0.1 ping statistics --- 00:35:17.345
1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.345
rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3115789 00:35:17.345 07:59:09
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:17.345 07:59:09
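The nvmf_tcp_init trace above builds a two-port loopback topology: one port of the NIC pair (cvl_0_0) is moved into a private network namespace to act as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, and the cross-namespace pings verify the link. The sketch below reconstructs those steps from the trace; interface names, addresses, and the 4420 firewall rule are taken from this log, and the commands are printed rather than executed so the sketch needs neither root nor real hardware.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmf_tcp_init sets up
# (names and addresses as seen in the trace above).
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
cmds=(
  "ip netns add $ns"
  "ip link set $target_if netns $ns"             # target port leaves root ns
  "ip addr add 10.0.0.1/24 dev $initiator_if"    # initiator side
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"  # target side
  "ip link set $initiator_if up"
  "ip netns exec $ns ip link set $target_if up"
  "ip netns exec $ns ip link set lo up"
  "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${cmds[@]}"   # swap printf for eval/sudo to apply for real
```

Applying these for real requires root; the test then validates the path with `ping -c 1` in each direction before starting nvmf_tgt inside the namespace.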
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3115789 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3115789 ']' 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.345 07:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0b0054d15903da4d713bd151ea657022 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.isU 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0b0054d15903da4d713bd151ea657022 0 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0b0054d15903da4d713bd151ea657022 0 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0b0054d15903da4d713bd151ea657022 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:18.724 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.isU 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.isU 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.isU 00:35:18.725 07:59:10 
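The gen_dhchap_key calls in this trace draw `len/2` random bytes with `xxd`, then hand the hex string to format_key, whose inline python emits the DH-HMAC-CHAP secret that gets written to the `/tmp/spdk.key-*` file. A hedged reconstruction follows: the `DHHC-1:<digest>:<base64>:` layout with a CRC-32 of the key appended to the base64 payload follows the NVMe specification's secret representation, but the two-digit digest field and the exact python body are assumptions, not a transcript of common.sh.

```shell
#!/usr/bin/env bash
# Hedged reconstruction of gen_dhchap_key/format_key from the trace above.
# Assumption: the secret is "DHHC-1:<digest>:<base64(key + crc32(key))>:".
gen_dhchap_key() {            # args: digest id (0..3), key length in hex chars
  local digest=$1 len=$2 key
  # od stands in for the xxd call in the log; both yield len/2 random bytes as hex.
  key=$(od -An -tx1 -N "$((len / 2))" /dev/urandom | tr -d ' \n')
  python3 - "$key" "$digest" <<'PY'
import base64, binascii, struct, sys
key = binascii.unhexlify(sys.argv[1])
crc = struct.pack("<I", binascii.crc32(key) & 0xFFFFFFFF)  # little-endian CRC-32
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PY
}
secret=$(gen_dhchap_key 0 32)   # digest 0 = 'null' per the digests map above
echo "$secret"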
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=53356975ddcb652b4a9bc919c2e81b927f3f3a0b2d033675be9932eb19936685 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hVF 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 53356975ddcb652b4a9bc919c2e81b927f3f3a0b2d033675be9932eb19936685 3 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 53356975ddcb652b4a9bc919c2e81b927f3f3a0b2d033675be9932eb19936685 3 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=53356975ddcb652b4a9bc919c2e81b927f3f3a0b2d033675be9932eb19936685 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hVF 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hVF 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hVF 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b4b6a31dc5d3f89cb56a9caf3a8a9b74a462b364710e338b 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Yld 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b4b6a31dc5d3f89cb56a9caf3a8a9b74a462b364710e338b 0 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b4b6a31dc5d3f89cb56a9caf3a8a9b74a462b364710e338b 0 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:18.725 07:59:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b4b6a31dc5d3f89cb56a9caf3a8a9b74a462b364710e338b 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Yld 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Yld 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Yld 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=661c43c29b32050d8170eab267f1d08771ee2194cd987539 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pxg 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 661c43c29b32050d8170eab267f1d08771ee2194cd987539 2 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 661c43c29b32050d8170eab267f1d08771ee2194cd987539 2 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=661c43c29b32050d8170eab267f1d08771ee2194cd987539 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pxg 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pxg 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.pxg 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c686a35e12a716c3b011ab802570226a 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.I8C 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c686a35e12a716c3b011ab802570226a 1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c686a35e12a716c3b011ab802570226a 1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c686a35e12a716c3b011ab802570226a 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.I8C 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.I8C 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.I8C 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=a6f88aabfd31ccd9cd732f78b608c419 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7tn 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a6f88aabfd31ccd9cd732f78b608c419 1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a6f88aabfd31ccd9cd732f78b608c419 1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a6f88aabfd31ccd9cd732f78b608c419 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:18.725 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7tn 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7tn 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.7tn 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:18.984 07:59:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2dbad99c41fad96f96b9c77573729946647287582785aa4a 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4Bs 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2dbad99c41fad96f96b9c77573729946647287582785aa4a 2 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2dbad99c41fad96f96b9c77573729946647287582785aa4a 2 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2dbad99c41fad96f96b9c77573729946647287582785aa4a 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4Bs 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4Bs 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4Bs 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba8c24c73ad14bdfe6ea412d33bd008c 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.u23 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba8c24c73ad14bdfe6ea412d33bd008c 0 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba8c24c73ad14bdfe6ea412d33bd008c 0 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba8c24c73ad14bdfe6ea412d33bd008c 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.u23 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.u23 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.u23 00:35:18.984 07:59:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:18.984 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b8ce7ab9486a4053271c737463fafe7fcfb415d27a1768b241f303f8a05415eb 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lrZ 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b8ce7ab9486a4053271c737463fafe7fcfb415d27a1768b241f303f8a05415eb 3 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b8ce7ab9486a4053271c737463fafe7fcfb415d27a1768b241f303f8a05415eb 3 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b8ce7ab9486a4053271c737463fafe7fcfb415d27a1768b241f303f8a05415eb 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lrZ 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lrZ 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lrZ 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3115789 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3115789 ']' 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.985 07:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.isU 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hVF ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hVF 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Yld 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.pxg ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pxg 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.I8C 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.7tn ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7tn 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.4Bs 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.u23 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.u23 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lrZ 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.245 07:59:11 
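The rpc_cmd calls above come from a loop in host/auth.sh: each generated secret file is registered with the running target as a named keyring entry `key$i`, and, when a companion secret exists, a matching `ckey$i` entry (used as the controller-side secret for bidirectional authentication). The sketch below mirrors that loop with a stand-in rpc_cmd that prints the rpc.py invocation instead of talking to the SPDK socket; the file names are abbreviated from this log.

```shell
#!/usr/bin/env bash
# Sketch of the host/auth.sh key-registration loop (dry run).
rpc_cmd() { echo "rpc.py $*"; }    # stand-in for the real RPC helper
keys=(/tmp/spdk.key-null.isU /tmp/spdk.key-null.Yld)        # abbreviated list
ckeys=(/tmp/spdk.key-sha512.hVF /tmp/spdk.key-sha384.pxg)   # controller keys
for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
  if [[ -n "${ckeys[$i]}" ]]; then
    rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  fi
done
```

In the real test the helper blocks until the RPC socket answers, which is why the trace brackets the loop with waitforlisten on the nvmf_tgt pid.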
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:19.245 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:19.505 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:19.505 07:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:20.442 Waiting for block devices as requested 00:35:20.442 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:20.442 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:20.699 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:20.699 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:20.699 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:20.958 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:20.958 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:20.958 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:20.958 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:21.216 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:21.216 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:21.216 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:21.216 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:21.475 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:21.475 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:21.475 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:21.475 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:22.044 No valid GPT data, bailing 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:22.044 00:35:22.044 Discovery Log Number of Records 2, Generation counter 2 00:35:22.044 =====Discovery Log Entry 0====== 00:35:22.044 trtype: tcp 00:35:22.044 adrfam: ipv4 00:35:22.044 subtype: current discovery subsystem 00:35:22.044 treq: not specified, sq flow control disable supported 00:35:22.044 portid: 1 00:35:22.044 trsvcid: 4420 00:35:22.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:22.044 traddr: 10.0.0.1 00:35:22.044 eflags: none 00:35:22.044 sectype: none 00:35:22.044 =====Discovery Log Entry 1====== 00:35:22.044 trtype: tcp 00:35:22.044 adrfam: ipv4 00:35:22.044 subtype: nvme subsystem 00:35:22.044 treq: not specified, sq flow control disable supported 00:35:22.044 portid: 1 00:35:22.044 trsvcid: 4420 00:35:22.044 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:22.044 traddr: 10.0.0.1 00:35:22.044 eflags: none 00:35:22.044 sectype: none 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:22.044 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.045 07:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.303 nvme0n1 00:35:22.303 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.303 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.303 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
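(Aside from the trace: the `DHHC-1:...:` strings being echoed above are DH-HMAC-CHAP secrets in the NVMe in-band authentication key representation, `DHHC-1:<hash id>:<base64 payload>:`. The sketch below decomposes one key copied verbatim from this log; the assumption that the base64 payload is the raw secret followed by a 4-byte CRC-32 comes from the NVMe in-band auth spec (TP 8006), not from this log itself.)

```python
import base64

# key1 as registered in the trace above (hash indicator "00").
key = ("DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4"
       "YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:")

# Assumed layout per the NVMe in-band authentication key format:
#   DHHC-1 : <hash id> : base64( secret || 4-byte CRC-32 ) :
prefix, hash_id, b64 = key.rstrip(":").split(":")
raw = base64.b64decode(b64)
secret, crc = raw[:-4], raw[-4:]

# A 48-byte secret; hash id "00" means no hash-function restriction.
print(prefix, hash_id, len(secret))
```

The same decomposition applies to the `ckey*` (controller key) strings the script registers for bidirectional authentication.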
00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.304 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.562 nvme0n1 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.562 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.563 07:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.563 
07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.563 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.821 nvme0n1 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.821 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.822 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.822 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.822 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.822 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.822 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.822 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.822 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:22.822 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.822 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:35:23.080 nvme0n1 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.080 07:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.340 nvme0n1 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:23.340 07:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.340 nvme0n1 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.340 
07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:23.340 
07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:23.340 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.599 07:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.599 nvme0n1 00:35:23.599 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.600 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.600 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.600 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.600 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.600 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.600 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.600 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.600 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.600 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.859 07:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:23.859 07:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:23.859 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.860 nvme0n1 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.860 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.119 07:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
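[editor's note] The `DHHC-1:NN:…:` strings passed around above are NVMe DH-HMAC-CHAP secrets. A minimal parser for that container format, under my understanding of NVMe TP 8006 (base64 payload = raw secret followed by a little-endian CRC-32 of the secret; the two-digit field names the hash the secret is sized for: 00 = unspecified, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). The CRC layout is an assumption on my part, so the sketch reports the check result rather than enforcing it:

```python
import base64
import struct
import zlib

def parse_dhchap_key(key: str):
    """Split a 'DHHC-1:<hh>:<base64>:' secret into its fields.

    Returns (hash_id, secret_bytes, crc_ok). crc_ok compares the trailing
    4 bytes of the payload against zlib.crc32 of the secret -- assumed to
    be the CRC-32 the format appends.
    """
    prefix, hash_id, payload, _ = key.split(":")
    assert prefix == "DHHC-1", "not a DH-HMAC-CHAP secret"
    raw = base64.b64decode(payload)
    secret, (crc,) = raw[:-4], struct.unpack("<I", raw[-4:])
    crc_ok = (zlib.crc32(secret) & 0xFFFFFFFF) == crc
    return hash_id, secret, crc_ok

# One of the keys exercised in the log above (the keyid-3 controller key):
hid, secret, crc_ok = parse_dhchap_key(
    "DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:")
print(hid, len(secret), crc_ok)
```

The 48-character payload decodes to 36 bytes: a 32-byte secret plus the 4-byte trailer, which is why the `:00:` keys in the log are visibly shorter than the `:03:` (SHA-512-sized, 64-byte) ones.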
00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.119 07:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.119 nvme0n1 00:35:24.119 07:59:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.119 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.119 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.119 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.119 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.119 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:24.378 07:59:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.378 nvme0n1 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.378 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.636 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.637 07:59:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.637 nvme0n1 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.637 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.895 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.154 nvme0n1 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.154 
07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.154 07:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.414 nvme0n1 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.414 07:59:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.414 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.673 nvme0n1 00:35:25.673 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.673 07:59:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.673 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.673 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.673 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.673 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.673 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.673 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.673 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.673 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:25.933 
07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.933 07:59:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.933 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.195 nvme0n1 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.195 07:59:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.195 
07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.195 07:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.456 nvme0n1 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.456 07:59:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:26.456 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.024 nvme0n1
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]]
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.024 07:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.591 nvme0n1
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]]
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:27.591 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:28.158 nvme0n1
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:28.158 07:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]]
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:28.158 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:28.728 nvme0n1
00:35:28.728 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:28.728 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:28.728 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:28.728 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=:
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=:
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:28.729 07:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:29.296 nvme0n1
00:35:29.296 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:29.296 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:29.296 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:29.296 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:29.296 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:29.296 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:29.296 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq:
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=:
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq:
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]]
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=:
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:29.556 07:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:30.495 nvme0n1
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]]
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:30.496 07:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:31.433 nvme0n1
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]]
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:31.433 07:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:32.813 nvme0n1
00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:32.813 07:59:24
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:32.813 07:59:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.813 07:59:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.813 07:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.751 nvme0n1 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.751 07:59:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.751 07:59:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.751 07:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.688 nvme0n1 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.688 07:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.688 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.947 nvme0n1 00:35:34.947 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.947 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.947 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.947 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.948 07:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.948 07:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.948 07:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.948 nvme0n1 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.948 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.207 07:59:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.207 07:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.207 nvme0n1 00:35:35.207 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.207 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.207 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.207 07:59:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.207 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.207 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.468 07:59:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.468 07:59:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.468 nvme0n1 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.468 07:59:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.468 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.728 07:59:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.728 nvme0n1 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.728 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.988 nvme0n1 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:35.988 07:59:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.988 07:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.249 nvme0n1 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.249 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.508 nvme0n1 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:36.508 07:59:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.508 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 
-- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.509 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.767 nvme0n1 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.767 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.026 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.026 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.026 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.027 nvme0n1 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.027 07:59:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.027 07:59:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.027 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:37.287 07:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.287 07:59:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.546 nvme0n1 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.546 
07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.546 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.805 nvme0n1 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.805 07:59:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:37.805 07:59:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.805 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.806 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.064 nvme0n1 00:35:38.064 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.064 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.064 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.064 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.064 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.064 07:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:38.324 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.325 07:59:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.325 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.585 nvme0n1 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.586 07:59:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:38.586 07:59:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:38.586 
07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.586 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.847 nvme0n1 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:38.847 07:59:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.847 07:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.416 nvme0n1 
00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.416 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:39.676 07:59:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.676 
07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.676 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.247 nvme0n1 00:35:40.247 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.247 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.247 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.247 07:59:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.247 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.247 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.247 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.247 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.247 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.247 07:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.247 07:59:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.247 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.248 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.817 nvme0n1 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:40.817 07:59:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.817 07:59:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.817 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.818 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.818 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.818 07:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.389 nvme0n1 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.389 07:59:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:41.389 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.960 nvme0n1 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.960 07:59:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.960 07:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.928 nvme0n1 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:
00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]]
00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:42.928 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.929 07:59:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:43.889 nvme0n1
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:43.889 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]]
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:44.150 07:59:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:45.088 nvme0n1
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]]
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:45.088 07:59:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:46.026 nvme0n1
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.026 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=:
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=:
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.027 07:59:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:46.965 nvme0n1
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:46.965 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq:
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=:
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq:
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]]
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=:
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.225 07:59:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.225 nvme0n1
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:47.225 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]]
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.226 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.485 nvme0n1
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.485 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.745 nvme0n1
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:47.745 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]]
00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest
dhgroup keyid ckey 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.746 07:59:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.746 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.006 nvme0n1 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:48.006 07:59:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:48.006 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.007 07:59:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.268 nvme0n1 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.268 
07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.268 
07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.268 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.528 nvme0n1 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.528 07:59:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.528 
07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.528 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.788 nvme0n1 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:48.788 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 
00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.789 07:59:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.789 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.048 nvme0n1 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.048 07:59:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]]
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.048 07:59:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.307 nvme0n1
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=:
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=:
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:49.307 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:49.308 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:49.308 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:49.308 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:49.308 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:49.308 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:49.308 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.308 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.566 nvme0n1
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq:
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=:
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq:
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]]
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=:
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.566 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.824 nvme0n1
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:
00:35:49.824 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==:
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]]
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==:
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.825 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.082 07:59:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.342 nvme0n1
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]]
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l:
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:50.342 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:50.343 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:50.343 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:50.343 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.343 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.603 nvme0n1
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==:
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]]
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00:
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.603 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:50.862 nvme0n1
00:35:50.862 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.862 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:50.862 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:50.862 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:50.862 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=:
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=:
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.122 07:59:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.383 nvme0n1
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq:
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=:
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq:
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]]
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=:
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770
-- # local -A ip_candidates 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.383 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.954 nvme0n1 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:51.954 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:51.955 07:59:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.955 07:59:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.524 nvme0n1 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
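The `DHHC-1:xx:…:` strings being echoed into the target above are NVMe DH-HMAC-CHAP secret representations. As a minimal sketch — assuming the TP 8006 layout, where the second field (`00`/`01`/`02`/`03`) names the secret transformation (none/SHA-256/SHA-384/SHA-512) and the third field is base64 of the raw secret followed by a 4-byte CRC32 footer — a structural check of such a key might look like:

```shell
#!/usr/bin/env bash
# Structural check of a DHHC-1 secret string like the ones in this log.
# Assumed layout (NVMe DH-HMAC-CHAP, TP 8006): DHHC-1:<hash id>:<base64 blob>:
# hash id 00/01/02/03 = none/SHA-256/SHA-384/SHA-512; the blob decodes to
# the raw secret plus a 4-byte CRC32 footer.
check_dhchap_key() {
    local key=$1 tag hid blob
    IFS=: read -r tag hid blob _ <<<"$key"
    [[ $tag == DHHC-1 ]] || return 1
    [[ $hid == 0[0-3] ]] || return 1
    # The blob must be valid base64; its decoded length is secret + 4 CRC bytes.
    base64 -d <<<"$blob" >/dev/null 2>&1
}
```

For example, the keyid-0 secret above is tagged `00` (no transformation) and its blob decodes to 36 bytes, i.e. a 32-byte secret plus the CRC footer.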
00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:52.524 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:52.525 
07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.525 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.094 nvme0n1 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.094 07:59:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.094 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.095 07:59:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
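The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` line traced at auth.sh@58 is why the keyid-4 attach call above carries no `--dhchap-ctrlr-key` argument: bash's `:+` expansion yields the flag pair only when a controller key exists for that index. A small self-contained sketch of that behavior (array contents are placeholders, not the real secrets):

```shell
#!/usr/bin/env bash
# Sketch of the ckey expansion used at auth.sh@58. With ${ckeys[keyid]:+...},
# the --dhchap-ctrlr-key arguments appear only when ckeys[keyid] is non-empty;
# the last slot is empty, mirroring keyid 4 in the log, so the resulting
# array has zero elements and the flag is simply omitted.
ckeys=(ctrl-key-0 ctrl-key-1 ctrl-key-2 ctrl-key-3 "")

build_ckey_args() {
    local keyid=$1
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"
}
```

So `build_ckey_args 0` reports two array elements (flag plus value), while `build_ckey_args 4` reports zero.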
00:35:53.662 nvme0n1 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:53.662 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.920 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:53.920 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:53.920 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:53.920 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:53.920 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.921 
07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.921 07:59:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.487 nvme0n1 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGIwMDU0ZDE1OTAzZGE0ZDcxM2JkMTUxZWE2NTcwMjItaPJq: 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: ]] 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=: 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.487 07:59:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.487 07:59:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.426 nvme0n1 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.426 07:59:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.426 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.427 07:59:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.427 07:59:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.376 nvme0n1 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.376 07:59:48 
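Each loop iteration above installs one DHHC-1 secret on the target (`nvmet_auth_set_key`) and then attaches with the matching host key. The DHHC-1 strings echoed in the trace follow the standard NVMe/nvme-cli key representation, `DHHC-1:<hash>:<base64 payload>:`, where the second field records the optional hash transform applied to the secret (00 = none) and the decoded payload is the secret followed by a 4-byte CRC-32. A small helper (hypothetical, not part of auth.sh) recovers the raw secret length from keys taken from this log:

```shell
# dhchap_key_len: hypothetical helper, NOT part of auth.sh.
# DHHC-1 keys look like "DHHC-1:<hash>:<base64 payload>:"; the decoded payload
# is the secret followed by a 4-byte CRC-32, so secret length = decoded - 4.
dhchap_key_len() {
	local b64
	b64=$(cut -d: -f3 <<< "$1")
	echo $(( $(base64 -d <<< "$b64" | wc -c) - 4 ))
}

# key2 from the trace (type 01): a 32-byte secret
dhchap_key_len "DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr:"
# a type-03 controller key from the trace: a 64-byte secret
dhchap_key_len "DHHC-1:03:NTMzNTY5NzVkZGNiNjUyYjRhOWJjOTE5YzJlODFiOTI3ZjNmM2EwYjJkMDMzNjc1YmU5OTMyZWIxOTkzNjY4NU19gpU=:"
```

This is why the keys of type 01/02/03 in the trace decode to 36, 52, and 68 bytes respectively: 32-, 48-, or 64-byte secrets plus the CRC-32 trailer.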
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.376 07:59:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.376 07:59:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.316 nvme0n1 00:35:57.316 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.316 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.316 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.316 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.317 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.317 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.317 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.317 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.317 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.317 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.575 07:59:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRiYWQ5OWM0MWZhZDk2Zjk2YjljNzc1NzM3Mjk5NDY2NDcyODc1ODI3ODVhYTRhnCzDFA==: 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: ]] 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE4YzI0YzczYWQxNGJkZmU2ZWE0MTJkMzNiZDAwOGMeuw00: 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.575 07:59:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:58.512 nvme0n1 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjhjZTdhYjk0ODZhNDA1MzI3MWM3Mzc0NjNmYWZlN2ZjZmI0MTVkMjdhMTc2OGIyNDFmMzAzZjhhMDU0MTVlYsnxLbQ=: 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.512 
07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.512 07:59:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.448 nvme0n1 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:35:59.448 
07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.448 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.708 request: 00:35:59.708 { 00:35:59.708 "name": "nvme0", 00:35:59.708 "trtype": "tcp", 00:35:59.708 "traddr": "10.0.0.1", 00:35:59.708 "adrfam": "ipv4", 00:35:59.708 "trsvcid": "4420", 00:35:59.708 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.708 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.708 "prchk_reftag": false, 00:35:59.708 "prchk_guard": false, 00:35:59.708 "hdgst": false, 00:35:59.708 "ddgst": false, 00:35:59.708 "allow_unrecognized_csi": false, 00:35:59.708 "method": "bdev_nvme_attach_controller", 00:35:59.708 "req_id": 1 00:35:59.708 } 00:35:59.708 Got JSON-RPC error response 00:35:59.708 response: 00:35:59.708 { 00:35:59.708 "code": -5, 00:35:59.708 "message": "Input/output 
error" 00:35:59.708 } 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.708 request: 00:35:59.708 { 00:35:59.708 "name": "nvme0", 00:35:59.708 "trtype": "tcp", 00:35:59.708 "traddr": "10.0.0.1", 
00:35:59.708 "adrfam": "ipv4", 00:35:59.708 "trsvcid": "4420", 00:35:59.708 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.708 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.708 "prchk_reftag": false, 00:35:59.708 "prchk_guard": false, 00:35:59.708 "hdgst": false, 00:35:59.708 "ddgst": false, 00:35:59.708 "dhchap_key": "key2", 00:35:59.708 "allow_unrecognized_csi": false, 00:35:59.708 "method": "bdev_nvme_attach_controller", 00:35:59.708 "req_id": 1 00:35:59.708 } 00:35:59.708 Got JSON-RPC error response 00:35:59.708 response: 00:35:59.708 { 00:35:59.708 "code": -5, 00:35:59.708 "message": "Input/output error" 00:35:59.708 } 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.708 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.967 07:59:51 
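The request dump above shows that the failing attach is an ordinary JSON-RPC call; only the `dhchap_key`/`dhchap_ctrlr_key` parameters distinguish the authenticated variants, and omitting or mismatching them yields the same `-5` (Input/output error) response. A sketch of that request body, with values copied from the dump and built with `jq` purely to illustrate the JSON shape (this does not talk to an SPDK target):

```shell
# Rebuild the bdev_nvme_attach_controller request body shown in the trace.
# Field values are copied from the log's dump; this only constructs JSON.
req=$(jq -n '{
  name: "nvme0",
  trtype: "tcp",
  traddr: "10.0.0.1",
  adrfam: "ipv4",
  trsvcid: "4420",
  subnqn: "nqn.2024-02.io.spdk:cnode0",
  hostnqn: "nqn.2024-02.io.spdk:host0",
  prchk_reftag: false,
  prchk_guard: false,
  hdgst: false,
  ddgst: false,
  dhchap_key: "key2",
  allow_unrecognized_csi: false,
  method: "bdev_nvme_attach_controller",
  req_id: 1
}')
jq -r .method <<< "$req"
```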
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:59.967 07:59:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.967 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.967 request: 00:35:59.967 { 00:35:59.967 "name": "nvme0", 00:35:59.967 "trtype": "tcp", 00:35:59.967 "traddr": "10.0.0.1", 00:35:59.967 "adrfam": "ipv4", 00:35:59.967 "trsvcid": "4420", 00:35:59.967 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:59.967 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:59.967 "prchk_reftag": false, 00:35:59.967 "prchk_guard": false, 00:35:59.967 "hdgst": false, 00:35:59.967 "ddgst": false, 00:35:59.967 "dhchap_key": "key1", 00:35:59.967 "dhchap_ctrlr_key": "ckey2", 00:35:59.967 "allow_unrecognized_csi": false, 00:35:59.967 "method": "bdev_nvme_attach_controller", 00:35:59.967 "req_id": 1 00:35:59.967 } 00:35:59.967 Got JSON-RPC error response 00:35:59.967 response: 00:35:59.967 { 00:35:59.967 "code": -5, 00:35:59.967 "message": "Input/output error" 00:35:59.968 } 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.968 nvme0n1 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.968 07:59:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.968 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.226 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.226 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.226 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.226 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:00.226 07:59:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.226 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.226 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.226 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:00.226 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:00.227 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:00.227 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:00.227 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.227 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:00.227 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.227 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:00.227 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.227 07:59:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.227 request: 00:36:00.227 { 00:36:00.227 "name": "nvme0", 00:36:00.227 "dhchap_key": "key1", 00:36:00.227 "dhchap_ctrlr_key": "ckey2", 00:36:00.227 "method": "bdev_nvme_set_keys", 00:36:00.227 "req_id": 1 00:36:00.227 } 00:36:00.227 Got JSON-RPC error response 00:36:00.227 response: 00:36:00.227 { 00:36:00.227 "code": -13, 00:36:00.227 "message": "Permission denied" 00:36:00.227 } 00:36:00.227 
07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:00.227 07:59:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:01.604 07:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.604 07:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.604 07:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:01.604 07:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.604 07:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.604 07:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:01.604 07:59:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRiNmEzMWRjNWQzZjg5Y2I1NmE5Y2FmM2E4YTliNzRhNDYyYjM2NDcxMGUzMzhip9bbxA==: 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: ]] 00:36:02.541 07:59:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjYxYzQzYzI5YjMyMDUwZDgxNzBlYWIyNjdmMWQwODc3MWVlMjE5NGNkOTg3NTM5U7rzEg==: 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.541 nvme0n1 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.541 07:59:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY4NmEzNWUxMmE3MTZjM2IwMTFhYjgwMjU3MDIyNmHHYUbr: 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: ]] 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTZmODhhYWJmZDMxY2NkOWNkNzMyZjc4YjYwOGM0MTm6KS/l: 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:02.541 
07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.541 request: 00:36:02.541 { 00:36:02.541 "name": "nvme0", 00:36:02.541 "dhchap_key": "key2", 00:36:02.541 "dhchap_ctrlr_key": "ckey1", 00:36:02.541 "method": "bdev_nvme_set_keys", 00:36:02.541 "req_id": 1 00:36:02.541 } 00:36:02.541 Got JSON-RPC error response 00:36:02.541 response: 00:36:02.541 { 00:36:02.541 "code": -13, 00:36:02.541 "message": "Permission denied" 00:36:02.541 } 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.541 07:59:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.541 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.801 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:02.801 07:59:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:03.740 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.740 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:03.740 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.740 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.740 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.740 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:03.740 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:03.741 rmmod nvme_tcp 00:36:03.741 rmmod nvme_fabrics 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3115789 ']' 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3115789 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3115789 ']' 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3115789 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3115789 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3115789' 00:36:03.741 killing process with pid 3115789 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3115789 00:36:03.741 07:59:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3115789 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.681 07:59:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:07.214 07:59:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:08.155 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:08.155 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:08.155 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:08.155 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:08.155 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:08.155 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:08.155 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:08.155 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:08.155 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:08.155 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:08.155 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:08.155 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:08.155 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:08.155 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:08.155 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:08.155 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:09.089 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:09.089 08:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.isU /tmp/spdk.key-null.Yld /tmp/spdk.key-sha256.I8C /tmp/spdk.key-sha384.4Bs /tmp/spdk.key-sha512.lrZ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:09.089 08:00:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:10.465 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:10.466 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:10.466 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:10.466 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:10.466 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:10.466 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:10.466 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:10.466 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:10.466 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:10.466 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:10.466 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:10.466 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:10.466 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:10.466 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:10.466 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:10.466 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:10.466 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:10.466 00:36:10.466 real 0m55.109s 00:36:10.466 user 0m52.456s 00:36:10.466 sys 0m6.096s 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.466 ************************************ 00:36:10.466 END TEST nvmf_auth_host 00:36:10.466 ************************************ 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:36:10.466 08:00:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.466 ************************************ 00:36:10.466 START TEST nvmf_digest 00:36:10.466 ************************************ 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:10.466 * Looking for test storage... 00:36:10.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.466 --rc genhtml_branch_coverage=1 00:36:10.466 --rc genhtml_function_coverage=1 00:36:10.466 --rc genhtml_legend=1 00:36:10.466 --rc geninfo_all_blocks=1 00:36:10.466 --rc geninfo_unexecuted_blocks=1 00:36:10.466 00:36:10.466 ' 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.466 --rc genhtml_branch_coverage=1 00:36:10.466 --rc genhtml_function_coverage=1 00:36:10.466 --rc genhtml_legend=1 00:36:10.466 --rc geninfo_all_blocks=1 00:36:10.466 --rc geninfo_unexecuted_blocks=1 00:36:10.466 00:36:10.466 ' 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.466 --rc genhtml_branch_coverage=1 00:36:10.466 --rc genhtml_function_coverage=1 00:36:10.466 --rc genhtml_legend=1 00:36:10.466 --rc geninfo_all_blocks=1 00:36:10.466 --rc geninfo_unexecuted_blocks=1 00:36:10.466 00:36:10.466 ' 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.466 --rc genhtml_branch_coverage=1 00:36:10.466 --rc genhtml_function_coverage=1 00:36:10.466 --rc genhtml_legend=1 00:36:10.466 --rc geninfo_all_blocks=1 00:36:10.466 --rc geninfo_unexecuted_blocks=1 00:36:10.466 00:36:10.466 ' 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.466 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:10.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:10.467 08:00:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:10.467 08:00:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.003 08:00:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:13.003 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:13.003 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:13.003 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:13.003 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:13.003 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:13.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:36:13.003 00:36:13.003 --- 10.0.0.2 ping statistics --- 00:36:13.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.004 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:13.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:36:13.004 00:36:13.004 --- 10.0.0.1 ping statistics --- 00:36:13.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.004 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.004 ************************************ 00:36:13.004 START TEST nvmf_digest_clean 00:36:13.004 ************************************ 00:36:13.004 
08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3125922 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3125922 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3125922 ']' 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:13.004 08:00:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:13.004 08:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:13.004 [2024-11-19 08:00:04.734665] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:13.004 [2024-11-19 08:00:04.734835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.004 [2024-11-19 08:00:04.876082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.263 [2024-11-19 08:00:05.008886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.263 [2024-11-19 08:00:05.008993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.263 [2024-11-19 08:00:05.009019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.263 [2024-11-19 08:00:05.009045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.263 [2024-11-19 08:00:05.009065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:13.263 [2024-11-19 08:00:05.010727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.830 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.830 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:13.830 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:13.830 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:13.830 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:14.088 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:14.088 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:14.088 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:14.088 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:14.088 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.088 08:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:14.347 null0 00:36:14.347 [2024-11-19 08:00:06.157019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:14.347 [2024-11-19 08:00:06.181333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3126085 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3126085 /var/tmp/bperf.sock 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3126085 ']' 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:14.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:14.347 08:00:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:14.347 [2024-11-19 08:00:06.277520] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:14.347 [2024-11-19 08:00:06.277671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126085 ] 00:36:14.605 [2024-11-19 08:00:06.423355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.863 [2024-11-19 08:00:06.546549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.457 08:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.457 08:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:15.457 08:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:15.457 08:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:15.457 08:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:16.048 08:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:16.048 08:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:16.617 nvme0n1 00:36:16.617 08:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:16.617 08:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:16.617 Running I/O for 2 seconds... 00:36:18.490 15046.00 IOPS, 58.77 MiB/s [2024-11-19T07:00:10.685Z] 14703.00 IOPS, 57.43 MiB/s 00:36:18.755 Latency(us) 00:36:18.755 [2024-11-19T07:00:10.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:18.755 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:18.755 nvme0n1 : 2.04 14445.57 56.43 0.00 0.00 8680.39 4466.16 47185.92 00:36:18.755 [2024-11-19T07:00:10.685Z] =================================================================================================================== 00:36:18.755 [2024-11-19T07:00:10.685Z] Total : 14445.57 56.43 0.00 0.00 8680.39 4466.16 47185.92 00:36:18.755 { 00:36:18.755 "results": [ 00:36:18.755 { 00:36:18.755 "job": "nvme0n1", 00:36:18.755 "core_mask": "0x2", 00:36:18.755 "workload": "randread", 00:36:18.755 "status": "finished", 00:36:18.755 "queue_depth": 128, 00:36:18.755 "io_size": 4096, 00:36:18.755 "runtime": 2.044502, 00:36:18.755 "iops": 14445.571586625985, 00:36:18.755 "mibps": 56.428014010257755, 00:36:18.755 "io_failed": 0, 00:36:18.755 "io_timeout": 0, 00:36:18.755 "avg_latency_us": 8680.391207421955, 00:36:18.755 "min_latency_us": 4466.157037037037, 00:36:18.755 "max_latency_us": 47185.92 00:36:18.755 } 00:36:18.755 ], 00:36:18.755 "core_count": 1 00:36:18.755 } 00:36:18.755 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:18.755 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:36:18.755 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:18.755 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:18.755 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:18.755 | select(.opcode=="crc32c") 00:36:18.755 | "\(.module_name) \(.executed)"' 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3126085 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3126085 ']' 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3126085 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3126085 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3126085' 00:36:19.013 killing process with pid 3126085 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3126085 00:36:19.013 Received shutdown signal, test time was about 2.000000 seconds 00:36:19.013 00:36:19.013 Latency(us) 00:36:19.013 [2024-11-19T07:00:10.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.013 [2024-11-19T07:00:10.943Z] =================================================================================================================== 00:36:19.013 [2024-11-19T07:00:10.943Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:19.013 08:00:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3126085 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3127253 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3127253 
/var/tmp/bperf.sock 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3127253 ']' 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:19.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:19.951 08:00:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:19.951 [2024-11-19 08:00:11.734777] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:19.951 [2024-11-19 08:00:11.734933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127253 ] 00:36:19.951 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:19.951 Zero copy mechanism will not be used. 
00:36:19.951 [2024-11-19 08:00:11.880277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.211 [2024-11-19 08:00:12.011943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.153 08:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.153 08:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:21.153 08:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:21.153 08:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:21.153 08:00:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:21.722 08:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:21.722 08:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:21.982 nvme0n1 00:36:21.982 08:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:21.982 08:00:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:21.982 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:21.982 Zero copy mechanism will not be used. 00:36:21.982 Running I/O for 2 seconds... 
00:36:24.308 4543.00 IOPS, 567.88 MiB/s [2024-11-19T07:00:16.238Z] 4621.50 IOPS, 577.69 MiB/s 00:36:24.308 Latency(us) 00:36:24.308 [2024-11-19T07:00:16.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.308 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:24.308 nvme0n1 : 2.00 4620.36 577.55 0.00 0.00 3455.88 1067.99 12815.93 00:36:24.308 [2024-11-19T07:00:16.238Z] =================================================================================================================== 00:36:24.308 [2024-11-19T07:00:16.238Z] Total : 4620.36 577.55 0.00 0.00 3455.88 1067.99 12815.93 00:36:24.308 { 00:36:24.308 "results": [ 00:36:24.308 { 00:36:24.308 "job": "nvme0n1", 00:36:24.308 "core_mask": "0x2", 00:36:24.308 "workload": "randread", 00:36:24.308 "status": "finished", 00:36:24.308 "queue_depth": 16, 00:36:24.308 "io_size": 131072, 00:36:24.308 "runtime": 2.003956, 00:36:24.308 "iops": 4620.360926088198, 00:36:24.308 "mibps": 577.5451157610247, 00:36:24.308 "io_failed": 0, 00:36:24.308 "io_timeout": 0, 00:36:24.308 "avg_latency_us": 3455.884100434812, 00:36:24.308 "min_latency_us": 1067.994074074074, 00:36:24.308 "max_latency_us": 12815.92888888889 00:36:24.308 } 00:36:24.308 ], 00:36:24.308 "core_count": 1 00:36:24.308 } 00:36:24.308 08:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:24.308 08:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:24.308 08:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:24.308 08:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:24.308 | select(.opcode=="crc32c") 00:36:24.308 | "\(.module_name) \(.executed)"' 00:36:24.308 08:00:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3127253 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3127253 ']' 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3127253 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3127253 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3127253' 00:36:24.308 killing process with pid 3127253 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3127253 00:36:24.308 Received shutdown signal, test time was about 2.000000 seconds 
00:36:24.308 00:36:24.308 Latency(us) 00:36:24.308 [2024-11-19T07:00:16.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.308 [2024-11-19T07:00:16.238Z] =================================================================================================================== 00:36:24.308 [2024-11-19T07:00:16.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:24.308 08:00:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3127253 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3127911 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3127911 /var/tmp/bperf.sock 00:36:25.248 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3127911 ']' 00:36:25.248 08:00:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:25.249 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:25.249 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:25.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:25.249 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:25.249 08:00:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:25.507 [2024-11-19 08:00:17.188232] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:25.507 [2024-11-19 08:00:17.188363] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127911 ] 00:36:25.507 [2024-11-19 08:00:17.328259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.764 [2024-11-19 08:00:17.466707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.330 08:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:26.330 08:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:26.330 08:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:26.330 08:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:26.330 08:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:26.895 08:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:26.895 08:00:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:27.462 nvme0n1 00:36:27.462 08:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:27.462 08:00:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:27.462 Running I/O for 2 seconds... 
00:36:29.338 16227.00 IOPS, 63.39 MiB/s [2024-11-19T07:00:21.526Z] 16425.50 IOPS, 64.16 MiB/s 00:36:29.596 Latency(us) 00:36:29.596 [2024-11-19T07:00:21.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.596 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:29.596 nvme0n1 : 2.01 16445.74 64.24 0.00 0.00 7768.20 3495.25 14854.83 00:36:29.596 [2024-11-19T07:00:21.526Z] =================================================================================================================== 00:36:29.596 [2024-11-19T07:00:21.526Z] Total : 16445.74 64.24 0.00 0.00 7768.20 3495.25 14854.83 00:36:29.596 { 00:36:29.596 "results": [ 00:36:29.596 { 00:36:29.596 "job": "nvme0n1", 00:36:29.596 "core_mask": "0x2", 00:36:29.596 "workload": "randwrite", 00:36:29.596 "status": "finished", 00:36:29.597 "queue_depth": 128, 00:36:29.597 "io_size": 4096, 00:36:29.597 "runtime": 2.010186, 00:36:29.597 "iops": 16445.74183682505, 00:36:29.597 "mibps": 64.24117905009786, 00:36:29.597 "io_failed": 0, 00:36:29.597 "io_timeout": 0, 00:36:29.597 "avg_latency_us": 7768.203573005838, 00:36:29.597 "min_latency_us": 3495.2533333333336, 00:36:29.597 "max_latency_us": 14854.826666666666 00:36:29.597 } 00:36:29.597 ], 00:36:29.597 "core_count": 1 00:36:29.597 } 00:36:29.597 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:29.597 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:29.597 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:29.597 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:29.597 | select(.opcode=="crc32c") 00:36:29.597 | "\(.module_name) \(.executed)"' 00:36:29.597 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:29.856 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:29.856 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:29.856 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:29.856 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:29.856 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3127911 00:36:29.856 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3127911 ']' 00:36:29.857 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3127911 00:36:29.857 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:29.857 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:29.857 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3127911 00:36:29.857 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:29.857 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:29.857 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3127911' 00:36:29.857 killing process with pid 3127911 00:36:29.857 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3127911 00:36:29.857 Received shutdown signal, test time was about 2.000000 seconds 
00:36:29.857 00:36:29.857 Latency(us) 00:36:29.857 [2024-11-19T07:00:21.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.857 [2024-11-19T07:00:21.787Z] =================================================================================================================== 00:36:29.857 [2024-11-19T07:00:21.787Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:29.857 08:00:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3127911 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3128467 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3128467 /var/tmp/bperf.sock 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3128467 ']' 00:36:30.790 08:00:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:30.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:30.790 08:00:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:30.790 [2024-11-19 08:00:22.577840] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:30.790 [2024-11-19 08:00:22.577980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128467 ] 00:36:30.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:30.790 Zero copy mechanism will not be used. 
00:36:30.790 [2024-11-19 08:00:22.719604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.047 [2024-11-19 08:00:22.848137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.984 08:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:31.984 08:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:31.984 08:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:31.984 08:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:31.984 08:00:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:32.553 08:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:32.553 08:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:32.810 nvme0n1 00:36:32.810 08:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:32.810 08:00:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:32.810 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:32.810 Zero copy mechanism will not be used. 00:36:32.810 Running I/O for 2 seconds... 
00:36:35.120 4831.00 IOPS, 603.88 MiB/s [2024-11-19T07:00:27.050Z] 4815.00 IOPS, 601.88 MiB/s 00:36:35.120 Latency(us) 00:36:35.120 [2024-11-19T07:00:27.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.120 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:35.120 nvme0n1 : 2.00 4813.47 601.68 0.00 0.00 3314.62 2572.89 12233.39 00:36:35.120 [2024-11-19T07:00:27.050Z] =================================================================================================================== 00:36:35.120 [2024-11-19T07:00:27.050Z] Total : 4813.47 601.68 0.00 0.00 3314.62 2572.89 12233.39 00:36:35.120 { 00:36:35.120 "results": [ 00:36:35.120 { 00:36:35.120 "job": "nvme0n1", 00:36:35.120 "core_mask": "0x2", 00:36:35.120 "workload": "randwrite", 00:36:35.120 "status": "finished", 00:36:35.120 "queue_depth": 16, 00:36:35.120 "io_size": 131072, 00:36:35.120 "runtime": 2.004792, 00:36:35.120 "iops": 4813.4669332279855, 00:36:35.120 "mibps": 601.6833666534982, 00:36:35.120 "io_failed": 0, 00:36:35.120 "io_timeout": 0, 00:36:35.120 "avg_latency_us": 3314.6155673767034, 00:36:35.120 "min_latency_us": 2572.8948148148147, 00:36:35.120 "max_latency_us": 12233.386666666667 00:36:35.120 } 00:36:35.120 ], 00:36:35.120 "core_count": 1 00:36:35.120 } 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:36:35.120 | select(.opcode=="crc32c") 00:36:35.120 | "\(.module_name) \(.executed)"' 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3128467 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3128467 ']' 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3128467 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:35.120 08:00:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128467 00:36:35.120 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:35.120 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:35.120 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128467' 00:36:35.120 killing process with pid 3128467 00:36:35.120 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3128467 00:36:35.120 Received shutdown signal, test time was about 2.000000 seconds 00:36:35.120 
00:36:35.120 Latency(us) 00:36:35.120 [2024-11-19T07:00:27.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.121 [2024-11-19T07:00:27.051Z] =================================================================================================================== 00:36:35.121 [2024-11-19T07:00:27.051Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:35.121 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3128467 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3125922 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3125922 ']' 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3125922 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3125922 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3125922' 00:36:36.055 killing process with pid 3125922 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3125922 00:36:36.055 08:00:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3125922 00:36:37.432 00:36:37.432 real 
0m24.457s 00:36:37.432 user 0m47.907s 00:36:37.432 sys 0m4.675s 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:37.432 ************************************ 00:36:37.432 END TEST nvmf_digest_clean 00:36:37.432 ************************************ 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:37.432 ************************************ 00:36:37.432 START TEST nvmf_digest_error 00:36:37.432 ************************************ 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3129285 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:37.432 
08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3129285 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3129285 ']' 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:37.432 08:00:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:37.432 [2024-11-19 08:00:29.247620] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:37.432 [2024-11-19 08:00:29.247814] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.689 [2024-11-19 08:00:29.410603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.689 [2024-11-19 08:00:29.549014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:37.689 [2024-11-19 08:00:29.549100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:37.689 [2024-11-19 08:00:29.549142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:37.689 [2024-11-19 08:00:29.549168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:37.689 [2024-11-19 08:00:29.549187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:37.689 [2024-11-19 08:00:29.550795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.626 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:38.626 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:38.626 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:38.626 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:38.626 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.626 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.626 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:38.627 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.627 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.627 [2024-11-19 08:00:30.333620] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:38.627 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.627 08:00:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:38.627 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:38.627 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.627 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.885 null0 00:36:38.885 [2024-11-19 08:00:30.732474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.885 [2024-11-19 08:00:30.756826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3129557 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3129557 /var/tmp/bperf.sock 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3129557 ']' 
00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:38.885 08:00:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:39.145 [2024-11-19 08:00:30.854013] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:39.145 [2024-11-19 08:00:30.854168] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129557 ] 00:36:39.145 [2024-11-19 08:00:31.001286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.405 [2024-11-19 08:00:31.123637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.970 08:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:39.970 08:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:39.970 08:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:39.970 08:00:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:40.538 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:40.538 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.538 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:40.539 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.539 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:40.539 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:40.799 nvme0n1 00:36:40.799 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:40.799 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.799 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:40.799 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.799 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:40.799 08:00:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:40.799 Running I/O for 2 seconds... 00:36:40.799 [2024-11-19 08:00:32.716867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:40.799 [2024-11-19 08:00:32.716939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.799 [2024-11-19 08:00:32.716969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.799 [2024-11-19 08:00:32.732303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:40.799 [2024-11-19 08:00:32.732354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.799 [2024-11-19 08:00:32.732384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.058 [2024-11-19 08:00:32.752666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.058 [2024-11-19 08:00:32.752741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.058 [2024-11-19 08:00:32.752768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.058 [2024-11-19 08:00:32.770533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.058 [2024-11-19 08:00:32.770582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:23442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.058 [2024-11-19 08:00:32.770626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.058 [2024-11-19 08:00:32.787454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.058 [2024-11-19 08:00:32.787502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.058 [2024-11-19 08:00:32.787532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.058 [2024-11-19 08:00:32.804512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.058 [2024-11-19 08:00:32.804561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.058 [2024-11-19 08:00:32.804590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.058 [2024-11-19 08:00:32.821528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.058 [2024-11-19 08:00:32.821576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.058 [2024-11-19 08:00:32.821606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.058 [2024-11-19 08:00:32.840536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.059 [2024-11-19 
08:00:32.840585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.059 [2024-11-19 08:00:32.840616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.059 [2024-11-19 08:00:32.856953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.059 [2024-11-19 08:00:32.856995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.059 [2024-11-19 08:00:32.857021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.059 [2024-11-19 08:00:32.877807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.059 [2024-11-19 08:00:32.877864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.059 [2024-11-19 08:00:32.877906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.059 [2024-11-19 08:00:32.899159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.059 [2024-11-19 08:00:32.899216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.059 [2024-11-19 08:00:32.899247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.059 [2024-11-19 08:00:32.916860] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.059 [2024-11-19 08:00:32.916901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.059 [2024-11-19 08:00:32.916926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.059 [2024-11-19 08:00:32.934598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.059 [2024-11-19 08:00:32.934646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.059 [2024-11-19 08:00:32.934676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.059 [2024-11-19 08:00:32.950132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.059 [2024-11-19 08:00:32.950175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.059 [2024-11-19 08:00:32.950201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.059 [2024-11-19 08:00:32.968519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.059 [2024-11-19 08:00:32.968567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.059 [2024-11-19 08:00:32.968597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.059 [2024-11-19 08:00:32.987780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.059 [2024-11-19 08:00:32.987821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.059 [2024-11-19 08:00:32.987846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.317 [2024-11-19 08:00:33.005634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.317 [2024-11-19 08:00:33.005700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.005743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.024915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.024974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.025000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.040534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.040582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.040625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.058389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.058437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.058466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.077313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.077356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.077383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.094502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.094550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.094593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.110633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.110681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10926 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.110732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.129528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.129574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.129619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.147081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.147129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.147159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.165004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.165053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.165083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.181015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.181063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.181092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.200314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.200370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.200415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.220266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.220310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.220337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.318 [2024-11-19 08:00:33.236333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.318 [2024-11-19 08:00:33.236381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.318 [2024-11-19 08:00:33.236410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.254865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.254925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.254953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.272146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.272206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.272248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.290133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.290191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.290216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.306832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.306875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.306901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.322960] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.323003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.323028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.339241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.339298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.339325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.355319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.355373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.355399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.374060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.374118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.374144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.394146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.394207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.394233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.411005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.411050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.411076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.430613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.430670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.430719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.445597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.445653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.445679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.464789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.464843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.464868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.486063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.486107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.486151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.577 [2024-11-19 08:00:33.507306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.577 [2024-11-19 08:00:33.507373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.577 [2024-11-19 08:00:33.507400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.526750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.526793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:866 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.526818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.543316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.543372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.543399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.560171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.560216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.560243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.576921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.576965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.576991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.590658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.590731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.590766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.610445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.610490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.610518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.628262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.628321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.628347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.643287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.643357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.643384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.659900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.659942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.659968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 [2024-11-19 08:00:33.678208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.678252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.837 [2024-11-19 08:00:33.678279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.837 14215.00 IOPS, 55.53 MiB/s [2024-11-19T07:00:33.767Z] [2024-11-19 08:00:33.695838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.837 [2024-11-19 08:00:33.695882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.838 [2024-11-19 08:00:33.695909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.838 [2024-11-19 08:00:33.712666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.838 [2024-11-19 08:00:33.712734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.838 [2024-11-19 08:00:33.712761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:36:41.838 [2024-11-19 08:00:33.728248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.838 [2024-11-19 08:00:33.728302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.838 [2024-11-19 08:00:33.728328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.838 [2024-11-19 08:00:33.747574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.838 [2024-11-19 08:00:33.747620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.838 [2024-11-19 08:00:33.747646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.838 [2024-11-19 08:00:33.768990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:41.838 [2024-11-19 08:00:33.769037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.838 [2024-11-19 08:00:33.769063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.785175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.785232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.785258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.803835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.803890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.803917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.825422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.825469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.825497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.843951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.844011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.844039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.858563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.858617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 
[2024-11-19 08:00:33.858643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.875845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.875903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.875930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.893042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.893082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.893108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.910887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.910929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.910954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.928302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.928356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:90 nsid:1 lba:20242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.928382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.945809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.945854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.945881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.960084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.960138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.960164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:33.986386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:33.986434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:33.986461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:34.000473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 
08:00:34.000518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:34.000544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.097 [2024-11-19 08:00:34.019540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.097 [2024-11-19 08:00:34.019597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.097 [2024-11-19 08:00:34.019624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.357 [2024-11-19 08:00:34.040615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.357 [2024-11-19 08:00:34.040673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.357 [2024-11-19 08:00:34.040707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.357 [2024-11-19 08:00:34.056613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.357 [2024-11-19 08:00:34.056669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.357 [2024-11-19 08:00:34.056714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.357 [2024-11-19 08:00:34.073843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.357 [2024-11-19 08:00:34.073886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.357 [2024-11-19 08:00:34.073913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.357 [2024-11-19 08:00:34.088320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.357 [2024-11-19 08:00:34.088364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.357 [2024-11-19 08:00:34.088392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.357 [2024-11-19 08:00:34.107880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.357 [2024-11-19 08:00:34.107923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.357 [2024-11-19 08:00:34.107958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.357 [2024-11-19 08:00:34.127235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.357 [2024-11-19 08:00:34.127300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.357 [2024-11-19 08:00:34.127326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.357 
[2024-11-19 08:00:34.148432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.358 [2024-11-19 08:00:34.148492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.358 [2024-11-19 08:00:34.148518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.358 [2024-11-19 08:00:34.167921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.358 [2024-11-19 08:00:34.167965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.358 [2024-11-19 08:00:34.167991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.358 [2024-11-19 08:00:34.183165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.358 [2024-11-19 08:00:34.183219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.358 [2024-11-19 08:00:34.183246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.358 [2024-11-19 08:00:34.201614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.358 [2024-11-19 08:00:34.201669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.358 [2024-11-19 08:00:34.201701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.358 [2024-11-19 08:00:34.223077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.358 [2024-11-19 08:00:34.223133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.358 [2024-11-19 08:00:34.223160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.358 [2024-11-19 08:00:34.237015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.358 [2024-11-19 08:00:34.237056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.358 [2024-11-19 08:00:34.237098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.358 [2024-11-19 08:00:34.257321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.358 [2024-11-19 08:00:34.257379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.358 [2024-11-19 08:00:34.257407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.358 [2024-11-19 08:00:34.275494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.358 [2024-11-19 08:00:34.275552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.358 [2024-11-19 
08:00:34.275594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.294162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.294205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.294245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.309756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.309814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.309841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.328916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.328959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.328983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.349061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.349115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.349141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.368082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.368142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.368169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.386768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.386830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.386858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.403167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.403223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.403249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.418658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.418719] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.418755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.435090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.435148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.435176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.451706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.451750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.618 [2024-11-19 08:00:34.451777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.618 [2024-11-19 08:00:34.469321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.618 [2024-11-19 08:00:34.469364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.619 [2024-11-19 08:00:34.469390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.619 [2024-11-19 08:00:34.487373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150001f2a00) 00:36:42.619 [2024-11-19 08:00:34.487417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.619 [2024-11-19 08:00:34.487444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.619 [2024-11-19 08:00:34.501569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.619 [2024-11-19 08:00:34.501624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.619 [2024-11-19 08:00:34.501650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.619 [2024-11-19 08:00:34.521392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.619 [2024-11-19 08:00:34.521448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.619 [2024-11-19 08:00:34.521475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.619 [2024-11-19 08:00:34.538916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.619 [2024-11-19 08:00:34.538958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.619 [2024-11-19 08:00:34.538985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.877 [2024-11-19 
08:00:34.557747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.877 [2024-11-19 08:00:34.557793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.877 [2024-11-19 08:00:34.557821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.877 [2024-11-19 08:00:34.573237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.877 [2024-11-19 08:00:34.573293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.877 [2024-11-19 08:00:34.573319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.877 [2024-11-19 08:00:34.593177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.877 [2024-11-19 08:00:34.593234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.877 [2024-11-19 08:00:34.593260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.877 [2024-11-19 08:00:34.610244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.877 [2024-11-19 08:00:34.610289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.877 [2024-11-19 08:00:34.610316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.877 [2024-11-19 08:00:34.624409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.878 [2024-11-19 08:00:34.624464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.878 [2024-11-19 08:00:34.624505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.878 [2024-11-19 08:00:34.644496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.878 [2024-11-19 08:00:34.644552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.878 [2024-11-19 08:00:34.644578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.878 [2024-11-19 08:00:34.663302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.878 [2024-11-19 08:00:34.663350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.878 [2024-11-19 08:00:34.663381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.878 [2024-11-19 08:00:34.681143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:42.878 [2024-11-19 08:00:34.681190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.878 [2024-11-19 08:00:34.681220] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:42.878 14234.50 IOPS, 55.60 MiB/s [2024-11-19T07:00:34.808Z] [2024-11-19 08:00:34.697629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:42.878 [2024-11-19 08:00:34.697681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:42.878 [2024-11-19 08:00:34.697722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:42.878
00:36:42.878 Latency(us)
00:36:42.878 [2024-11-19T07:00:34.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:42.878 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:42.878 nvme0n1 : 2.05 13988.64 54.64 0.00 0.00 8964.45 4611.79 50875.35
00:36:42.878 [2024-11-19T07:00:34.808Z] ===================================================================================================================
00:36:42.878 [2024-11-19T07:00:34.808Z] Total : 13988.64 54.64 0.00 0.00 8964.45 4611.79 50875.35
00:36:42.878 {
00:36:42.878 "results": [
00:36:42.878 {
00:36:42.878 "job": "nvme0n1",
00:36:42.878 "core_mask": "0x2",
00:36:42.878 "workload": "randread",
00:36:42.878 "status": "finished",
00:36:42.878 "queue_depth": 128,
00:36:42.878 "io_size": 4096,
00:36:42.878 "runtime": 2.046375,
00:36:42.878 "iops": 13988.638446032619,
00:36:42.878 "mibps": 54.64311892981492,
00:36:42.878 "io_failed": 0,
00:36:42.878 "io_timeout": 0,
00:36:42.878 "avg_latency_us": 8964.44816817656,
00:36:42.878 "min_latency_us": 4611.792592592593,
00:36:42.878 "max_latency_us": 50875.35407407407
00:36:42.878 }
00:36:42.878 ],
00:36:42.878 "core_count": 1
00:36:42.878 }
00:36:42.878 08:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:42.878 08:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:42.878 08:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:42.878 08:00:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:42.878 | .driver_specific
00:36:42.878 | .nvme_error
00:36:42.878 | .status_code
00:36:42.878 | .command_transient_transport_error'
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 ))
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3129557
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3129557 ']'
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3129557
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129557
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129557'
00:36:43.137 killing process with pid 3129557 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3129557 Received shutdown signal, test time was about 2.000000 seconds
00:36:43.137
00:36:43.137 Latency(us)
00:36:43.137 [2024-11-19T07:00:35.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:43.137 [2024-11-19T07:00:35.067Z] ===================================================================================================================
00:36:43.137 [2024-11-19T07:00:35.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:43.137 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3129557
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3130107
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3130107 /var/tmp/bperf.sock
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3130107 ']'
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:44.073 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:44.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:44.074 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:44.074 08:00:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:44.331 [2024-11-19 08:00:36.034507] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:36:44.331 [2024-11-19 08:00:36.034656] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130107 ]
00:36:44.331 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:44.331 Zero copy mechanism will not be used.
00:36:44.331 [2024-11-19 08:00:36.188819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:44.588 [2024-11-19 08:00:36.325296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:45.153 08:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:45.153 08:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:36:45.153 08:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:45.153 08:00:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:45.411 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:45.411 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:45.411 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:45.411 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:45.411 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:45.411 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:45.991 nvme0n1
00:36:45.991 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:45.991 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.991 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:45.991 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.991 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:45.991 08:00:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:45.991 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:45.991 Zero copy mechanism will not be used. 00:36:45.991 Running I/O for 2 seconds... 00:36:45.991 [2024-11-19 08:00:37.871433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.991 [2024-11-19 08:00:37.871511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.991 [2024-11-19 08:00:37.871546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:45.991 [2024-11-19 08:00:37.879213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.991 [2024-11-19 08:00:37.879265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.991 [2024-11-19 08:00:37.879296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:36:45.991 [2024-11-19 08:00:37.887442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.992 [2024-11-19 08:00:37.887490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.992 [2024-11-19 08:00:37.887521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:45.992 [2024-11-19 08:00:37.896241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.992 [2024-11-19 08:00:37.896291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.992 [2024-11-19 08:00:37.896322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:45.992 [2024-11-19 08:00:37.905281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.992 [2024-11-19 08:00:37.905330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.992 [2024-11-19 08:00:37.905360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:45.992 [2024-11-19 08:00:37.913227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.992 [2024-11-19 08:00:37.913276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.992 [2024-11-19 08:00:37.913306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:45.992 [2024-11-19 08:00:37.920369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:45.992 [2024-11-19 08:00:37.920420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.992 [2024-11-19 08:00:37.920452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.254 [2024-11-19 08:00:37.928373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.254 [2024-11-19 08:00:37.928422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.254 [2024-11-19 08:00:37.928454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.254 [2024-11-19 08:00:37.936158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.254 [2024-11-19 08:00:37.936206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.254 [2024-11-19 08:00:37.936236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.254 [2024-11-19 08:00:37.943889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.254 [2024-11-19 08:00:37.943932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.254 [2024-11-19 
08:00:37.943987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.254 [2024-11-19 08:00:37.951362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.254 [2024-11-19 08:00:37.951412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.254 [2024-11-19 08:00:37.951443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.254 [2024-11-19 08:00:37.959770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.254 [2024-11-19 08:00:37.959814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.254 [2024-11-19 08:00:37.959841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.254 [2024-11-19 08:00:37.966560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.254 [2024-11-19 08:00:37.966611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.254 [2024-11-19 08:00:37.966648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.254 [2024-11-19 08:00:37.974290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.254 [2024-11-19 08:00:37.974339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.254 [2024-11-19 08:00:37.974370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.254 [2024-11-19 08:00:37.982652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.254 [2024-11-19 08:00:37.982712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.254 [2024-11-19 08:00:37.982758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.254 [2024-11-19 08:00:37.990944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.254 [2024-11-19 08:00:37.991005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.254 [2024-11-19 08:00:37.991034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.000536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.000586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.000627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.009877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.009923] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.009950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.018114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.018157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.018183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.026089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.026146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.026187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.034287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.034330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.034356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.041474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.041519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.041547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.046439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.046489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.046523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.052453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.052510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.052537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.059894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.059953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.059983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 
08:00:38.067324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.067379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.067407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.074646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.074714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.074767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.081950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.082008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.082050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.088945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.089004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.089030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.095434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.095476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.095501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.101787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.101829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.101856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.108663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.108729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.108756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.115413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.115455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.115481] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.122282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.122324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.122361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.129038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.129092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.129120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.135476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.135540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.135566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.142601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.142661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.142697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.149245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.149301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.149328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.255 [2024-11-19 08:00:38.155815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.255 [2024-11-19 08:00:38.155858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.255 [2024-11-19 08:00:38.155886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.256 [2024-11-19 08:00:38.162299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.256 [2024-11-19 08:00:38.162341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.256 [2024-11-19 08:00:38.162367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.256 [2024-11-19 08:00:38.168867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.256 [2024-11-19 08:00:38.168909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.256 [2024-11-19 08:00:38.168936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.256 [2024-11-19 08:00:38.175151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.256 [2024-11-19 08:00:38.175194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.256 [2024-11-19 08:00:38.175237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.256 [2024-11-19 08:00:38.180648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.256 [2024-11-19 08:00:38.180718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.256 [2024-11-19 08:00:38.180763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.256 [2024-11-19 08:00:38.184996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.256 [2024-11-19 08:00:38.185038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.256 [2024-11-19 08:00:38.185066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.190336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.190394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.190422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.195153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.195196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.195223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.199119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.199161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.199189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.204402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.204444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.204470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.211092] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.211151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.211178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.217477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.217518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.217544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.223954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.224014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.224052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.230534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.230577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.230605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.237638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.237703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.237733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.245069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.245128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.245155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.521 [2024-11-19 08:00:38.252337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.521 [2024-11-19 08:00:38.252398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.521 [2024-11-19 08:00:38.252426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.259740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.259785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.259814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.266773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.266818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.266846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.273661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.273726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.273756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.279958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.280017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.280043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.286271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.286328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.286354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.292859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.292904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.292932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.299558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.299601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.299643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.306204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.306248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.306290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.312624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.312666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.312717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.318857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.318902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.318929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.325139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.325181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.325207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.331361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.331403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.331428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.337595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.337637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.337702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.343992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.344049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.344077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.350349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.350391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.350419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.356460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.356503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.356530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.362723] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.362782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.362808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.368951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.369009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.369035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.375086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.375130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.375173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.381270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.381314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.381342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.387370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.387414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.387442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.393406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.393459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.393502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.399575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.399618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.399643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.405780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.405823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.405850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.411844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.411887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.411914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.522 [2024-11-19 08:00:38.418231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.522 [2024-11-19 08:00:38.418276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.522 [2024-11-19 08:00:38.418319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.523 [2024-11-19 08:00:38.424309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.523 [2024-11-19 08:00:38.424368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.523 [2024-11-19 08:00:38.424394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.523 [2024-11-19 08:00:38.430567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.523 [2024-11-19 08:00:38.430627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:46.523 [2024-11-19 08:00:38.430653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.523 [2024-11-19 08:00:38.436707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.523 [2024-11-19 08:00:38.436750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.523 [2024-11-19 08:00:38.436778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.523 [2024-11-19 08:00:38.443404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.523 [2024-11-19 08:00:38.443450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.523 [2024-11-19 08:00:38.443509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.523 [2024-11-19 08:00:38.449759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.790 [2024-11-19 08:00:38.449809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.449840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.453893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.453940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.453969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.459536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.459584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.459619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.463419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.463462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.463490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.468755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.468802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.468833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.475050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.475109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.475138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.481663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.481715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.481748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.488288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.488349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.488381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.494882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.494954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.494990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.499676] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.499729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.499762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.505175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.505223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.505251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.510835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.510879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.510909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.515834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.515879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.515908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.521222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.521266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.521294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.527277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.527336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.527363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.531525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.531568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.531595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.536785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.536838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.536875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.542202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.542259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.542287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.546823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.546868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.546899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.553816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.553858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.553900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.561095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.561141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.561168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.569121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.569165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.569192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.577256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.577319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.577346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.585651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.585706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.585735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.791 [2024-11-19 08:00:38.594349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.791 [2024-11-19 08:00:38.594410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.791 [2024-11-19 08:00:38.594453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.603413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.603470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.603499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.613029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.613075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.613103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.621850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.621896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.621923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.628424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.628468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.628496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.635228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.635284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.635312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.643038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.643098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.643133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.651558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.651619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.651647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.660394] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.660453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.660480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.669769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.669816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.669844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.678625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.678684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.678724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.688416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.688479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.688506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.697894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.697940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.697988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.706921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.706966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.706994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:46.792 [2024-11-19 08:00:38.715421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:46.792 [2024-11-19 08:00:38.715474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.792 [2024-11-19 08:00:38.715503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.725295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.725360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.725395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.734141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.734204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.734249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.741622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.741680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.741718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.748116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.748186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.748213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.754569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.754629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.754657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.760951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.760995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.761022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.767254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.767297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.767325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.773392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.773448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.773476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.779650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.779715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.779745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.785440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.785484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.785511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.789413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.789454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.789482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.795195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.795238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.795265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.800318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.800375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.800403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.804637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.804704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.804750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.810415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.810472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.810501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.814814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.063 [2024-11-19 08:00:38.814857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.063 [2024-11-19 08:00:38.814890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.063 [2024-11-19 08:00:38.820918] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.820977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.821005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.827005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.827048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.827090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.833792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.833837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.833864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.840326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.840384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.840409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.846727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.846777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.846803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.853380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.853421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.853463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.859821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.859864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.859890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.064 4532.00 IOPS, 566.50 MiB/s [2024-11-19T07:00:38.994Z] [2024-11-19 08:00:38.867980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.868025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 
[2024-11-19 08:00:38.868068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.874641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.874708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.874738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.882801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.882845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.882872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.891002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.891047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.891075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.898049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.898091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.898132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.905267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.905325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.905353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.912669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.912738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.912781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.920330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.920388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.920415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.928289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 
08:00:38.928346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.928372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.935619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.935664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.935713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.942717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.942774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.942802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.949879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.949924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.949967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.956618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.956680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.956720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.963356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.963417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.963445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.970559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.970629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.970657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 08:00:38.977373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.064 [2024-11-19 08:00:38.977432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.064 [2024-11-19 08:00:38.977461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.064 [2024-11-19 
08:00:38.982077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.064 [2024-11-19 08:00:38.982119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.064 [2024-11-19 08:00:38.982147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.064 [2024-11-19 08:00:38.987162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.064 [2024-11-19 08:00:38.987204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.064 [2024-11-19 08:00:38.987231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.065 [2024-11-19 08:00:38.992763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.065 [2024-11-19 08:00:38.992806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.065 [2024-11-19 08:00:38.992833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.324 [2024-11-19 08:00:38.998081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.324 [2024-11-19 08:00:38.998138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.324 [2024-11-19 08:00:38.998166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.324 [2024-11-19 08:00:39.001998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.324 [2024-11-19 08:00:39.002040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.324 [2024-11-19 08:00:39.002067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.324 [2024-11-19 08:00:39.008211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.008267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.008294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.014446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.014502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.014529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.020594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.020650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.020677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.026939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.026981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.027008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.033076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.033133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.033160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.039252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.039308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.039335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.045397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.045452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.045479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.051660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.051726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.051753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.057791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.057833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.057859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.063817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.063859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.063885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.070745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.070790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.070832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.076012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.076067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.076093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.083141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.083214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.083242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.088421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.088475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.088501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.094643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.094707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.094737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.100724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.100767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.100794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.106823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.106866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.106892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.113001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.113057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.113098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.119430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.119485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.119512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.126093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.126149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.126177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.132386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.132441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.132469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.138475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.138530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.138557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.144616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.144660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.144694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.150910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.150953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.325 [2024-11-19 08:00:39.150996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.325 [2024-11-19 08:00:39.157278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.325 [2024-11-19 08:00:39.157335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.157362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.163534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.163591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.163618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.169629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.169686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.169722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.175555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.175612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.175650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.181616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.181676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.181726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.187953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.187996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.188023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.194268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.194326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.194360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.200605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.200662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.200699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.206762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.206829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.206857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.213069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.213112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.213139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.218759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.218801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.218844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.224341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.224384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.224412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.228749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.228791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.228825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.235385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.235441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.235469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.241891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.241934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.241972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.248667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.248731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.248759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.326 [2024-11-19 08:00:39.254891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.326 [2024-11-19 08:00:39.254936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.326 [2024-11-19 08:00:39.254963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.261405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.261462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.261488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.268208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.268265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.268293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.274791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.274833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.274859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.280954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.281011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.281052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.287322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.287378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.287406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.293394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.293451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.293478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.299617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.299672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.299708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.305760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.305824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.305851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.311994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.312051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.312079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.318179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.318235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.318262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.324544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.324600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.324627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.331655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.331721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.331749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.337994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.338053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.338095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.343419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.343473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.343500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.349569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.349625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.349652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.355857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.355913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.355939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.362343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.362398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.362426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.368575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.368632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.368660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.374645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.374707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.374736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.380831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.380873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.380899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.386814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.386855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.386891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.393012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.393070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.393111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.399001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.399046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.399073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.405044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.405085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.588 [2024-11-19 08:00:39.405126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.588 [2024-11-19 08:00:39.411422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.588 [2024-11-19 08:00:39.411477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.411504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.417872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.417913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.417940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.424185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.424240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.424267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.430478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.430533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.430561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.436513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.436568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.436596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.442640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.442705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.442734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.448827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.448869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.448896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.454928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.454971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.454999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.460890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.460931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.460958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.467193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.467251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:47.589 [2024-11-19 08:00:39.467278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:47.589 [2024-11-19 08:00:39.473322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:47.589 [2024-11-19 08:00:39.473377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:36:47.589 [2024-11-19 08:00:39.473404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.589 [2024-11-19 08:00:39.479459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.589 [2024-11-19 08:00:39.479515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.589 [2024-11-19 08:00:39.479542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.589 [2024-11-19 08:00:39.485750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.589 [2024-11-19 08:00:39.485792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.589 [2024-11-19 08:00:39.485828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.589 [2024-11-19 08:00:39.491942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.589 [2024-11-19 08:00:39.491984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.589 [2024-11-19 08:00:39.492019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.589 [2024-11-19 08:00:39.498035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.589 [2024-11-19 08:00:39.498092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.589 [2024-11-19 08:00:39.498119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.589 [2024-11-19 08:00:39.505968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.589 [2024-11-19 08:00:39.506027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.589 [2024-11-19 08:00:39.506055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.589 [2024-11-19 08:00:39.511717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.589 [2024-11-19 08:00:39.511783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.589 [2024-11-19 08:00:39.511808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.589 [2024-11-19 08:00:39.519323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.589 [2024-11-19 08:00:39.519370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.589 [2024-11-19 08:00:39.519401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.524916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.524959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.524986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.530575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.530623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.530653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.535123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.535169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.535198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.540128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.540175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.540205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.546441] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.546497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.546527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.553246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.553294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.553324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.559816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.559856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.559881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.566470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.566517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.566547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.573174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.573222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.573252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.579737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.579795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.579821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.584571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.584617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.584647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.589960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.589998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.590023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.596945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.596986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.597040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.603660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.603717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.603763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.610288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.610336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.610396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.616878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.616918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.616944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.623821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.623860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.623886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.630609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.630656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.630687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.637165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.637212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.637242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.849 [2024-11-19 08:00:39.644782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.849 [2024-11-19 08:00:39.644825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.849 [2024-11-19 08:00:39.644851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.650406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.650453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.650484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.657122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.657180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.657210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.663900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.663942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.663982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.670594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.670641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.670671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.677393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.677439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.677468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.683978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.684017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.684061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.690571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.690618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.690647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.697326] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.697374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.697404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.704018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.704066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.704096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.710282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.710330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.710369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.715324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.715375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.715407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.720409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.720455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.720485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.727462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.727522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.727554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.734291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.734339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.734369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.741408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.741456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.741486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.748591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.748640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.748670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.755523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.755572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.755602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.762247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.762295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.762326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.768612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.768668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.768710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.772776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.772819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.772846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:47.850 [2024-11-19 08:00:39.778478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:47.850 [2024-11-19 08:00:39.778525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.850 [2024-11-19 08:00:39.778555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.782889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.782939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.782967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.787975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.788035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.788066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.793153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.793201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.793239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.798085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.798131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.798163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.804478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.804525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.804556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.812712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.812774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.812809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.821486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.821534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.821565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.830214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.830262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.830293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.838821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.838863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.838889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.847390] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.847438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.847469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.856024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.856081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.856121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:48.110 [2024-11-19 08:00:39.864661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:48.110 [2024-11-19 08:00:39.864721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.110 [2024-11-19 08:00:39.864765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:48.110 4696.50 IOPS, 587.06 MiB/s 00:36:48.110 Latency(us) 00:36:48.110 [2024-11-19T07:00:40.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.110 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:48.110 nvme0n1 : 2.00 4693.51 586.69 0.00 0.00 3402.65 1007.31 15922.82 00:36:48.110 [2024-11-19T07:00:40.040Z] 
=================================================================================================================== 00:36:48.110 [2024-11-19T07:00:40.040Z] Total : 4693.51 586.69 0.00 0.00 3402.65 1007.31 15922.82 00:36:48.110 { 00:36:48.110 "results": [ 00:36:48.110 { 00:36:48.110 "job": "nvme0n1", 00:36:48.110 "core_mask": "0x2", 00:36:48.110 "workload": "randread", 00:36:48.110 "status": "finished", 00:36:48.110 "queue_depth": 16, 00:36:48.110 "io_size": 131072, 00:36:48.110 "runtime": 2.004683, 00:36:48.110 "iops": 4693.510145993157, 00:36:48.110 "mibps": 586.6887682491446, 00:36:48.110 "io_failed": 0, 00:36:48.110 "io_timeout": 0, 00:36:48.110 "avg_latency_us": 3402.646517636778, 00:36:48.110 "min_latency_us": 1007.3125925925926, 00:36:48.110 "max_latency_us": 15922.82074074074 00:36:48.110 } 00:36:48.110 ], 00:36:48.110 "core_count": 1 00:36:48.110 } 00:36:48.110 08:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:48.110 08:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:48.110 08:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:48.110 | .driver_specific 00:36:48.110 | .nvme_error 00:36:48.110 | .status_code 00:36:48.110 | .command_transient_transport_error' 00:36:48.111 08:00:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 304 > 0 )) 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3130107 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3130107 ']' 00:36:48.370 08:00:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3130107 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130107 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130107' 00:36:48.370 killing process with pid 3130107 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3130107 00:36:48.370 Received shutdown signal, test time was about 2.000000 seconds 00:36:48.370 00:36:48.370 Latency(us) 00:36:48.370 [2024-11-19T07:00:40.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.370 [2024-11-19T07:00:40.300Z] =================================================================================================================== 00:36:48.370 [2024-11-19T07:00:40.300Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:48.370 08:00:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3130107 00:36:49.307 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:49.307 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:49.307 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randwrite 00:36:49.307 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:49.307 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:49.308 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3130764 00:36:49.308 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:49.308 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3130764 /var/tmp/bperf.sock 00:36:49.308 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3130764 ']' 00:36:49.308 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:49.308 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:49.308 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:49.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:49.308 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:49.308 08:00:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:49.308 [2024-11-19 08:00:41.123118] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:36:49.308 [2024-11-19 08:00:41.123278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130764 ] 00:36:49.565 [2024-11-19 08:00:41.257188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.565 [2024-11-19 08:00:41.378105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:50.501 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:51.067 nvme0n1 00:36:51.067 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:51.067 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.067 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:51.067 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.067 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:51.067 08:00:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:51.326 Running I/O for 2 seconds... 
00:36:51.326 [2024-11-19 08:00:43.071258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:36:51.326 [2024-11-19 08:00:43.073332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.073389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.086095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:36:51.326 [2024-11-19 08:00:43.087432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.087481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.102346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be99d8 00:36:51.326 [2024-11-19 08:00:43.103684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.103737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.120048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7da8 00:36:51.326 [2024-11-19 08:00:43.122258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.122302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.135039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf92c0 00:36:51.326 [2024-11-19 08:00:43.136645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.136704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.149580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:36:51.326 [2024-11-19 08:00:43.151981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.152020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.167661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1b48 00:36:51.326 [2024-11-19 08:00:43.169655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.169716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.182631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:36:51.326 [2024-11-19 08:00:43.184100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.184153] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.198474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:36:51.326 [2024-11-19 08:00:43.199925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.199984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.214130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:36:51.326 [2024-11-19 08:00:43.215824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.215865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.230824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:36:51.326 [2024-11-19 08:00:43.232044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.232093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:51.326 [2024-11-19 08:00:43.247558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:36:51.326 [2024-11-19 08:00:43.248585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9309 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:36:51.326 [2024-11-19 08:00:43.248625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.263091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:36:51.585 [2024-11-19 08:00:43.264511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.264555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.280245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb760 00:36:51.585 [2024-11-19 08:00:43.281582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.281626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.296950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:36:51.585 [2024-11-19 08:00:43.298039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.298111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.315707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:36:51.585 [2024-11-19 08:00:43.318015] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.318065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.328715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:36:51.585 [2024-11-19 08:00:43.330102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.330145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.346585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfef90 00:36:51.585 [2024-11-19 08:00:43.348169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.348213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.363513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7538 00:36:51.585 [2024-11-19 08:00:43.365487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.365530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.378390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:36:51.585 [2024-11-19 
08:00:43.379785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.379839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.396126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:36:51.585 [2024-11-19 08:00:43.398324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.398367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.410884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:36:51.585 [2024-11-19 08:00:43.412459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.412502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.425199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:36:51.585 [2024-11-19 08:00:43.427625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.427665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.441202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200016be27f0 00:36:51.585 [2024-11-19 08:00:43.443114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.443154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.456871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beee38 00:36:51.585 [2024-11-19 08:00:43.458210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.458253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.473134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:36:51.585 [2024-11-19 08:00:43.474362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.474406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 08:00:43.489397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:36:51.585 [2024-11-19 08:00:43.491060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.491104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:51.585 [2024-11-19 
08:00:43.505602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf988 00:36:51.585 [2024-11-19 08:00:43.507059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.585 [2024-11-19 08:00:43.507102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.523929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef270 00:36:51.844 [2024-11-19 08:00:43.526532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.526576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.535638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:36:51.844 [2024-11-19 08:00:43.537036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.537080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.551768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08 00:36:51.844 [2024-11-19 08:00:43.553062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.553104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.570760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:36:51.844 [2024-11-19 08:00:43.573010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.573054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.586781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9168 00:36:51.844 [2024-11-19 08:00:43.589030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.589073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.601656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.602001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.602040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.619749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.620088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.620141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.637765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.638083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.638136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.655758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.656144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.656183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.673844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.674267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.674310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.692002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.692338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25413 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:51.844 [2024-11-19 08:00:43.692376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.709902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.710314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.710356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.728138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.728489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.728533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.746374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.746728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.746767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:51.844 [2024-11-19 08:00:43.764625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:51.844 [2024-11-19 08:00:43.764969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:4665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:51.844 [2024-11-19 08:00:43.765033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.783059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.783466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.783504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.801521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.801955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.802019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.819882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.820264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.820320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.838098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.838434] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.838472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.856080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.856416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.856468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.874395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.874727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.874765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.892626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.892979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.893033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.910602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.910971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.911009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.928631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.929201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.929244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.946777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.947092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.947145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.964785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.965114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.965166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.102 [2024-11-19 08:00:43.982841] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.102 [2024-11-19 08:00:43.983251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.102 [2024-11-19 08:00:43.983294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.103 [2024-11-19 08:00:44.000887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.103 [2024-11-19 08:00:44.001218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.103 [2024-11-19 08:00:44.001270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.103 [2024-11-19 08:00:44.018825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.103 [2024-11-19 08:00:44.019162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.103 [2024-11-19 08:00:44.019199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.361 [2024-11-19 08:00:44.036951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.361 [2024-11-19 08:00:44.037384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.361 [2024-11-19 08:00:44.037443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.361 
[2024-11-19 08:00:44.055048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.361 14996.00 IOPS, 58.58 MiB/s [2024-11-19T07:00:44.291Z] [2024-11-19 08:00:44.055905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.361 [2024-11-19 08:00:44.055958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.361 [2024-11-19 08:00:44.073165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.361 [2024-11-19 08:00:44.073558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.073609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.091268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.091605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.091643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.109230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.109649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.109721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.127326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.127724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.127788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.145428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.145820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.145874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.163517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.163947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.163986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.181532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.181870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16585 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:52.362 [2024-11-19 08:00:44.181923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.199499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.199901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.199955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.217494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.217842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.217894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.235491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.235902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.235955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.253545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.253949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:3865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.254004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.271642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.272044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.272101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.362 [2024-11-19 08:00:44.289960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.362 [2024-11-19 08:00:44.290389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.362 [2024-11-19 08:00:44.290428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.308030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.308366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.308419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.326050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.326384] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.326438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.344127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.344466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.344504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.362170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.362511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.362554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.380044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.380472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.380511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.398125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) 
with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.398517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.398571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.416252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.416564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.416629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.434270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.434672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.434739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.452481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.452815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.452854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.470608] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.471020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.471074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.488847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.622 [2024-11-19 08:00:44.489241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.622 [2024-11-19 08:00:44.489297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.622 [2024-11-19 08:00:44.506913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.623 [2024-11-19 08:00:44.507338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.623 [2024-11-19 08:00:44.507382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.623 [2024-11-19 08:00:44.525029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.623 [2024-11-19 08:00:44.525425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.623 [2024-11-19 08:00:44.525477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:36:52.623 [2024-11-19 08:00:44.543045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.623 [2024-11-19 08:00:44.543381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.623 [2024-11-19 08:00:44.543435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.883 [2024-11-19 08:00:44.561063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.883 [2024-11-19 08:00:44.561455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.883 [2024-11-19 08:00:44.561510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.883 [2024-11-19 08:00:44.579062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.883 [2024-11-19 08:00:44.579396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.883 [2024-11-19 08:00:44.579457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.883 [2024-11-19 08:00:44.596956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.883 [2024-11-19 08:00:44.597312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.883 [2024-11-19 08:00:44.597351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.883 [2024-11-19 08:00:44.614902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.883 [2024-11-19 08:00:44.615331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.883 [2024-11-19 08:00:44.615374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.883 [2024-11-19 08:00:44.632963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.883 [2024-11-19 08:00:44.633310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.633365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.651132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.651557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.651595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.669255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.669581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 
08:00:44.669619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.687151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.687486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.687540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.705140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.705558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.705596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.723156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.723483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.723536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.741227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.741646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12023 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.741707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.759360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.759722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.759775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.777484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.777827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.777881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.795455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.795872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.795911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:52.884 [2024-11-19 08:00:44.813485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:52.884 [2024-11-19 08:00:44.813847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.884 [2024-11-19 08:00:44.813900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.831450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.831895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.831935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.849499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.849846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.849885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.867390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.867780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.867819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.885392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.885810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.885855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.903480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.903819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.903872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.921575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.922010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.922066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.939608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.939974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.940013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.957818] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.958139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.958191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.975719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.976064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.976102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:44.993712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:44.994097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:44.994150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:45.011819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:45.012158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:45.012195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:45.029967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:45.030311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:45.030364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 [2024-11-19 08:00:45.047900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:36:53.145 [2024-11-19 08:00:45.048245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.145 [2024-11-19 08:00:45.048282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:53.145 14578.00 IOPS, 56.95 MiB/s 00:36:53.145 Latency(us) 00:36:53.145 [2024-11-19T07:00:45.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.145 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:53.145 nvme0n1 : 2.01 14577.60 56.94 0.00 0.00 8755.06 3519.53 21165.70 00:36:53.145 [2024-11-19T07:00:45.075Z] =================================================================================================================== 00:36:53.145 [2024-11-19T07:00:45.075Z] Total : 14577.60 56.94 0.00 0.00 8755.06 3519.53 21165.70 00:36:53.145 { 00:36:53.145 "results": [ 00:36:53.146 { 00:36:53.146 "job": "nvme0n1", 00:36:53.146 "core_mask": "0x2", 00:36:53.146 "workload": "randwrite", 00:36:53.146 "status": "finished", 00:36:53.146 "queue_depth": 128, 00:36:53.146 "io_size": 4096, 00:36:53.146 "runtime": 2.010482, 00:36:53.146 "iops": 14577.598804664753, 00:36:53.146 
"mibps": 56.94374533072169, 00:36:53.146 "io_failed": 0, 00:36:53.146 "io_timeout": 0, 00:36:53.146 "avg_latency_us": 8755.06484974397, 00:36:53.146 "min_latency_us": 3519.525925925926, 00:36:53.146 "max_latency_us": 21165.70074074074 00:36:53.146 } 00:36:53.146 ], 00:36:53.146 "core_count": 1 00:36:53.146 } 00:36:53.404 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:53.404 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:53.404 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:53.404 | .driver_specific 00:36:53.404 | .nvme_error 00:36:53.404 | .status_code 00:36:53.404 | .command_transient_transport_error' 00:36:53.404 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 114 > 0 )) 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3130764 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3130764 ']' 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3130764 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130764 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130764' 00:36:53.662 killing process with pid 3130764 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3130764 00:36:53.662 Received shutdown signal, test time was about 2.000000 seconds 00:36:53.662 00:36:53.662 Latency(us) 00:36:53.662 [2024-11-19T07:00:45.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.662 [2024-11-19T07:00:45.592Z] =================================================================================================================== 00:36:53.662 [2024-11-19T07:00:45.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:53.662 08:00:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3130764 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3131311 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 
-q 16 -z 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3131311 /var/tmp/bperf.sock 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3131311 ']' 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:54.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:54.594 08:00:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:54.594 [2024-11-19 08:00:46.353179] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:36:54.594 [2024-11-19 08:00:46.353331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131311 ] 00:36:54.594 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:54.594 Zero copy mechanism will not be used. 
00:36:54.594 [2024-11-19 08:00:46.496389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.853 [2024-11-19 08:00:46.631897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.419 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:55.419 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:55.419 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:55.419 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:55.677 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:55.677 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.677 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:55.677 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.677 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:55.677 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:56.244 nvme0n1 00:36:56.244 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:56.244 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.244 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:56.244 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.244 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:56.244 08:00:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:56.244 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:56.244 Zero copy mechanism will not be used. 00:36:56.244 Running I/O for 2 seconds... 00:36:56.244 [2024-11-19 08:00:48.085920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.086184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.086235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.093999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.094212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.094256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.244 
[2024-11-19 08:00:48.101567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.101787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.101829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.109012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.109232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.109273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.116383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.116607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.116648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.123818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.123991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.124032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.131328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.131525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.131573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.139503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.139716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.139767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.147854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.148049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.148103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.155394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.155551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.155590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.162881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.163041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.163081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.244 [2024-11-19 08:00:48.170341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.244 [2024-11-19 08:00:48.170493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.244 [2024-11-19 08:00:48.170541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.503 [2024-11-19 08:00:48.177839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.503 [2024-11-19 08:00:48.178005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.503 [2024-11-19 08:00:48.178045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.503 [2024-11-19 08:00:48.186360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.503 [2024-11-19 08:00:48.186623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:56.503 [2024-11-19 08:00:48.186668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.503 [2024-11-19 08:00:48.194190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.503 [2024-11-19 08:00:48.194366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.503 [2024-11-19 08:00:48.194420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.503 [2024-11-19 08:00:48.201490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.503 [2024-11-19 08:00:48.201651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.503 [2024-11-19 08:00:48.201699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.503 [2024-11-19 08:00:48.208889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.503 [2024-11-19 08:00:48.209084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.209124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.216280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.216498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.216538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.223345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.223513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.223552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.230458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.230682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.230730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.237587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.237814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.237854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.244793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 
08:00:48.244995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.245034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.251810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.252020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.252060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.258858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.259048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.259088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.265968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.266183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.266223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.273178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.273369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.273409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.280379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.280518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.280557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.287527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.287759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.287798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.294784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.295004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.295043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 
08:00:48.301992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.302154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.302195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.309300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.309514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.309553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.316786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.316967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.317016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.324184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.324389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.324437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.332384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.332601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.332641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.339731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.339904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.339943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.347268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.347419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.347477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.354524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.354736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.354775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.361782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.361931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.361982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.368919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.369156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.369196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.376031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.376241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.376282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.383144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.383357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.383396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.390508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.390757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.390797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.397873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.504 [2024-11-19 08:00:48.398063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.504 [2024-11-19 08:00:48.398109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.504 [2024-11-19 08:00:48.405235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.505 [2024-11-19 08:00:48.405380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.505 [2024-11-19 08:00:48.405419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.505 [2024-11-19 08:00:48.412612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.505 [2024-11-19 08:00:48.412828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.505 [2024-11-19 08:00:48.412868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.505 [2024-11-19 08:00:48.419876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.505 [2024-11-19 08:00:48.420080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.505 [2024-11-19 08:00:48.420119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.505 [2024-11-19 08:00:48.426818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.505 [2024-11-19 08:00:48.427026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.505 [2024-11-19 08:00:48.427065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.505 [2024-11-19 08:00:48.434041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.505 [2024-11-19 08:00:48.434239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.505 [2024-11-19 08:00:48.434278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.765 [2024-11-19 08:00:48.441497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.765 [2024-11-19 
08:00:48.441639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.765 [2024-11-19 08:00:48.441696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.765 [2024-11-19 08:00:48.448869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.765 [2024-11-19 08:00:48.449068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.765 [2024-11-19 08:00:48.449116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.765 [2024-11-19 08:00:48.456180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.765 [2024-11-19 08:00:48.456377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.765 [2024-11-19 08:00:48.456416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.765 [2024-11-19 08:00:48.463482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.765 [2024-11-19 08:00:48.463719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.765 [2024-11-19 08:00:48.463766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.765 [2024-11-19 08:00:48.470778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.765 [2024-11-19 08:00:48.470978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.765 [2024-11-19 08:00:48.471024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.477895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.478118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.478157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.485028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.485229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.485268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.492112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.492288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.492328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 
08:00:48.499256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.499483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.499522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.506391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.506617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.506656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.513493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.513737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.513776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.520782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.520965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.521004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.527898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.528088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.528127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.534871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.535095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.535134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.541913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.542138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.542178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.548921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.549065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.549105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.556237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.556444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.556483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.563356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.563578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.563618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.570395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.570608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.570648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.577563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.577730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.577770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.584871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.585087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.585127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.591940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.592158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.592197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.599054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.599251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.599290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.606144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.606347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.606386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.613379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.613618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.613663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.620794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.620970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.621008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.627836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.628054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.628094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.634888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 
08:00:48.635103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.635142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.642332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.642527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.642566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.649399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.649649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.649700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.656712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.656916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.766 [2024-11-19 08:00:48.656970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.766 [2024-11-19 08:00:48.663963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.766 [2024-11-19 08:00:48.664172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.767 [2024-11-19 08:00:48.664212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:56.767 [2024-11-19 08:00:48.671181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.767 [2024-11-19 08:00:48.671387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.767 [2024-11-19 08:00:48.671425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:56.767 [2024-11-19 08:00:48.678309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.767 [2024-11-19 08:00:48.678492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.767 [2024-11-19 08:00:48.678531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:56.767 [2024-11-19 08:00:48.685408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.767 [2024-11-19 08:00:48.685553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.767 [2024-11-19 08:00:48.685592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:56.767 [2024-11-19 
08:00:48.692460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:56.767 [2024-11-19 08:00:48.692679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.767 [2024-11-19 08:00:48.692729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.700579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.700784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.700824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.708321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.708479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.708537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.715360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.715532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.715591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.722415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.722565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.722604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.729527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.729687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.729735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.736705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.736929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.736968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.743848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.744053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.744094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.750869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.751089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.751128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.757882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.758021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.758068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.764927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.765118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.765157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.772059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.772261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.772301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.026 [2024-11-19 08:00:48.779226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.026 [2024-11-19 08:00:48.779393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.026 [2024-11-19 08:00:48.779433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.786372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.786630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.786673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.793822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.794029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.794079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.801002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.801240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.801280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.808239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.808385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.808426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.815616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.815838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.815877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.823206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.823408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.823447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.830873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 
08:00:48.831086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.831125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.838627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.838825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.838865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.846127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.846326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.846365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.853661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.853865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.853905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.861134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.861301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.861341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.868425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.868631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.868670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.875466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.875611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.875650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.882591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.882809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.882856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 
08:00:48.889741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.889958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.889998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.896790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.896970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.897009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.903942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.904114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.904153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.910985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.911216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.911255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.918128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.918325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.918364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.925327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.925494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.925553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.932422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.932627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.932666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.939869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.940089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.940128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.947270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.947453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.947493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.027 [2024-11-19 08:00:48.954861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.027 [2024-11-19 08:00:48.955072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.027 [2024-11-19 08:00:48.955111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.287 [2024-11-19 08:00:48.962671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.287 [2024-11-19 08:00:48.962889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.287 [2024-11-19 08:00:48.962930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.287 [2024-11-19 08:00:48.970196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.287 [2024-11-19 08:00:48.970394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:57.287 [2024-11-19 08:00:48.970434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.287 [2024-11-19 08:00:48.977787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.287 [2024-11-19 08:00:48.977991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.287 [2024-11-19 08:00:48.978030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.287 [2024-11-19 08:00:48.985143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.287 [2024-11-19 08:00:48.985307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.287 [2024-11-19 08:00:48.985346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.287 [2024-11-19 08:00:48.992346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.287 [2024-11-19 08:00:48.992498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:48.992555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:48.999532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:48.999747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:48.999788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.006851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.007040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.007079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.013945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.014177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.014216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.020970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.021171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.021210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.028207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 
08:00:49.028391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.028430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.035316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.035466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.035505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.042806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.042980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.043025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.050250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.050475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.050514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.057632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.057800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.057840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.065276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.065518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.065561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.073030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.073255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.073294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.080742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.082793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.082847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.288 4216.00 IOPS, 527.00 
MiB/s [2024-11-19T07:00:49.218Z] [2024-11-19 08:00:49.089710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.089901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.089939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.096903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.097037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.097076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.103950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.104107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.104146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.111014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.111168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.111208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.118089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.118232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.118270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.126215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.126389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.126443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.133648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.133801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.133841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.140933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.141148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 
08:00:49.141188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.148137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.148315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.148354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.155172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.155346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.155386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.162324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.162543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.162582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.169387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.169630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.169670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.176434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.176675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.176723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.288 [2024-11-19 08:00:49.183542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.288 [2024-11-19 08:00:49.183716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.288 [2024-11-19 08:00:49.183756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.289 [2024-11-19 08:00:49.190680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.289 [2024-11-19 08:00:49.190911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.289 [2024-11-19 08:00:49.190949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.289 [2024-11-19 08:00:49.197870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.289 [2024-11-19 08:00:49.198074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.289 [2024-11-19 08:00:49.198122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.289 [2024-11-19 08:00:49.204902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.289 [2024-11-19 08:00:49.205064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.289 [2024-11-19 08:00:49.205103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.289 [2024-11-19 08:00:49.211920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.289 [2024-11-19 08:00:49.212145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.289 [2024-11-19 08:00:49.212184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.289 [2024-11-19 08:00:49.219021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.289 [2024-11-19 08:00:49.219210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.289 [2024-11-19 08:00:49.219250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.547 [2024-11-19 08:00:49.226436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:36:57.547 [2024-11-19 08:00:49.226645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.547 [2024-11-19 08:00:49.226685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.547 [2024-11-19 08:00:49.233807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.547 [2024-11-19 08:00:49.233954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.547 [2024-11-19 08:00:49.233993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.547 [2024-11-19 08:00:49.240858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.547 [2024-11-19 08:00:49.241055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.547 [2024-11-19 08:00:49.241094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.547 [2024-11-19 08:00:49.247853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.547 [2024-11-19 08:00:49.248086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.547 [2024-11-19 08:00:49.248125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.547 [2024-11-19 08:00:49.255048] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.547 [2024-11-19 08:00:49.255249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.547 [2024-11-19 08:00:49.255288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.547 [2024-11-19 08:00:49.262888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.262994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.263034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.270564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.270766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.270806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.277618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.277869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.277909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.284699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.284908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.284947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.291936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.292150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.292190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.299249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.299440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.299479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.306381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.306535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.306574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.313383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.313603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.313643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.320424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.320622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.320668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.327432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.327584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.327624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.334912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.335101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.335140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.341927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.342122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.342161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.348991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.349230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.349270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.356142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.356301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.356340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.363241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.363440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.363479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.370264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.370460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.370499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.377345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.377486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.377526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.384656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.384888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.384927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.391848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 
08:00:49.392044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.392083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.398891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.399115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.399154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.405871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.406087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.406125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.412936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.413172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.413211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.421139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.421259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.421298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.429640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.429856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.429896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.438130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.438349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.438388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.446545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.446672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.446719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 
08:00:49.454859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.455012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.455051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.463378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.463572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.463611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.471785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.548 [2024-11-19 08:00:49.471987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.548 [2024-11-19 08:00:49.472032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.548 [2024-11-19 08:00:49.480418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.549 [2024-11-19 08:00:49.480545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.549 [2024-11-19 08:00:49.480596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.488504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.488739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.488779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.496127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.496522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.496567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.503988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.504392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.504436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.511961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.512269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.512312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.518484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.518861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.518901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.525463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.525886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.525926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.532283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.532740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.532780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.539915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.540300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.540344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.547768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.548205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.548248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.555305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.555670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.555748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.562478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.562862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.562903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.569249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.569671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.569739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.575981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.576416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.576468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.582512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.582884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.582924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.588822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.589156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.589199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.595025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 
08:00:49.595335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.595392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.601167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.601472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.601515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.607456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.607765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.607804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.613798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.614089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.614133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.620345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.807 [2024-11-19 08:00:49.620644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.807 [2024-11-19 08:00:49.620699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.807 [2024-11-19 08:00:49.627170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.627434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.627477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.633288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.633549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.633600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.639467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.639794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.639833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 
08:00:49.645607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.645939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.645989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.652018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.652285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.652328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.658334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.658629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.658671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.664842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.665110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.665153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.671013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.671427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.671470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.677344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.677645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.677700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.683608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.683886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.683925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.689965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.690251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.690294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.696331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.696595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.696639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.702575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.702811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.702850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.708723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.709008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.709052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.714945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.715245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.715289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.721136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.721473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.721516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.727239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.727597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.727640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:57.808 [2024-11-19 08:00:49.733521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:57.808 [2024-11-19 08:00:49.733842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.808 [2024-11-19 08:00:49.733882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.066 [2024-11-19 08:00:49.740162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.066 [2024-11-19 08:00:49.740455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.066 [2024-11-19 08:00:49.740506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.066 [2024-11-19 08:00:49.746398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.066 [2024-11-19 08:00:49.746656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.066 [2024-11-19 08:00:49.746708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.066 [2024-11-19 08:00:49.753285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.066 [2024-11-19 08:00:49.753527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.066 [2024-11-19 08:00:49.753570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.759721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.760050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.760094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.766112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 
08:00:49.766368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.766411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.772385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.772653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.772708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.778523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.778807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.778847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.784866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.785159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.785203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.791114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.791399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.791443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.797430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.797680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.797748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.803838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.804132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.804176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.810255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.810561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.810605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 
08:00:49.816621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.816948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.816992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.822994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.823271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.823314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.829346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.829628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.829681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.835854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.836126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.836169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.842138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.842389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.842432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.848467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.848743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.848783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.854762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.855016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.855060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.861055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.861312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.861355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.867329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.867607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.867650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.873772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.874094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.874137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.880065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.880340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.880383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.886168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.886558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.886600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.892328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.892674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.892734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.898557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.898808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.898847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.904593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.904927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.904966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.910908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.911200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.067 [2024-11-19 08:00:49.911244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.067 [2024-11-19 08:00:49.917276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.067 [2024-11-19 08:00:49.917554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.917597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.923764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.924085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.924128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.930042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.930335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.930378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.936247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 
08:00:49.936551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.936594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.942573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.942868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.942908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.948897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.949209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.949252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.955146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.955454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.955498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.961616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.961870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.961909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.967849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.968165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.968208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.974125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.974383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.974426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.980325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.980606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.980648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 
08:00:49.986858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.987139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.987182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.068 [2024-11-19 08:00:49.993201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.068 [2024-11-19 08:00:49.993448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.068 [2024-11-19 08:00:49.993491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.000088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.000365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.000423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.007118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.007402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.007453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.013651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.013967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.014048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.020503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.020847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.020891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.027088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.027355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.027400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.033337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.033646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.033709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.039698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.039988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.040044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.046216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.046527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.046571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.052581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.052886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.052927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.059074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.059610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.059680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.065626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.326 [2024-11-19 08:00:50.065901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.326 [2024-11-19 08:00:50.065943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.326 [2024-11-19 08:00:50.071847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.327 [2024-11-19 08:00:50.072060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.327 [2024-11-19 08:00:50.072101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.327 [2024-11-19 08:00:50.078232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.327 [2024-11-19 08:00:50.078527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.327 [2024-11-19 08:00:50.078571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.327 4383.00 IOPS, 547.88 MiB/s [2024-11-19T07:00:50.257Z] [2024-11-19 08:00:50.085878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:36:58.327 [2024-11-19 08:00:50.086011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.327 [2024-11-19 08:00:50.086051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.327 00:36:58.327 Latency(us) 00:36:58.327 [2024-11-19T07:00:50.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.327 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:58.327 nvme0n1 : 2.00 4381.25 547.66 0.00 0.00 3640.78 2767.08 11990.66 00:36:58.327 [2024-11-19T07:00:50.257Z] =================================================================================================================== 00:36:58.327 [2024-11-19T07:00:50.257Z] Total : 4381.25 547.66 0.00 0.00 3640.78 2767.08 11990.66 00:36:58.327 { 00:36:58.327 "results": [ 00:36:58.327 { 00:36:58.327 "job": "nvme0n1", 00:36:58.327 "core_mask": "0x2", 00:36:58.327 "workload": "randwrite", 00:36:58.327 "status": "finished", 00:36:58.327 "queue_depth": 16, 00:36:58.327 "io_size": 131072, 00:36:58.327 "runtime": 2.004449, 00:36:58.327 "iops": 4381.253900697898, 00:36:58.327 "mibps": 547.6567375872372, 00:36:58.327 "io_failed": 0, 00:36:58.327 "io_timeout": 0, 00:36:58.327 "avg_latency_us": 3640.7797147363717, 00:36:58.327 "min_latency_us": 2767.0755555555556, 00:36:58.327 "max_latency_us": 11990.660740740741 00:36:58.327 } 00:36:58.327 ], 00:36:58.327 "core_count": 1 00:36:58.327 } 00:36:58.327 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:58.327 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:58.327 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:58.327 | .driver_specific 00:36:58.327 | .nvme_error 00:36:58.327 
| .status_code 00:36:58.327 | .command_transient_transport_error' 00:36:58.327 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 284 > 0 )) 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3131311 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3131311 ']' 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3131311 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131311 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131311' 00:36:58.584 killing process with pid 3131311 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3131311 00:36:58.584 Received shutdown signal, test time was about 2.000000 seconds 00:36:58.584 00:36:58.584 Latency(us) 00:36:58.584 [2024-11-19T07:00:50.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.584 [2024-11-19T07:00:50.514Z] 
=================================================================================================================== 00:36:58.584 [2024-11-19T07:00:50.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:58.584 08:00:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3131311 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3129285 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3129285 ']' 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3129285 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129285 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129285' 00:36:59.522 killing process with pid 3129285 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3129285 00:36:59.522 08:00:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3129285 00:37:00.901 00:37:00.901 real 0m23.254s 00:37:00.901 user 0m45.706s 00:37:00.901 sys 0m4.768s 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:00.901 ************************************ 00:37:00.901 END TEST nvmf_digest_error 00:37:00.901 ************************************ 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:00.901 rmmod nvme_tcp 00:37:00.901 rmmod nvme_fabrics 00:37:00.901 rmmod nvme_keyring 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3129285 ']' 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3129285 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3129285 ']' 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3129285 00:37:00.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3129285) - No such process 00:37:00.901 08:00:52 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3129285 is not found' 00:37:00.901 Process with pid 3129285 is not found 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:00.901 08:00:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:02.806 00:37:02.806 real 0m52.328s 00:37:02.806 user 1m34.566s 00:37:02.806 sys 0m11.099s 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:02.806 ************************************ 00:37:02.806 END TEST nvmf_digest 00:37:02.806 ************************************ 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.806 ************************************ 00:37:02.806 START TEST nvmf_bdevperf 00:37:02.806 ************************************ 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:02.806 * Looking for test storage... 
00:37:02.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:02.806 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:02.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.807 --rc genhtml_branch_coverage=1 00:37:02.807 --rc genhtml_function_coverage=1 00:37:02.807 --rc genhtml_legend=1 00:37:02.807 --rc geninfo_all_blocks=1 00:37:02.807 --rc geninfo_unexecuted_blocks=1 00:37:02.807 00:37:02.807 ' 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:37:02.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.807 --rc genhtml_branch_coverage=1 00:37:02.807 --rc genhtml_function_coverage=1 00:37:02.807 --rc genhtml_legend=1 00:37:02.807 --rc geninfo_all_blocks=1 00:37:02.807 --rc geninfo_unexecuted_blocks=1 00:37:02.807 00:37:02.807 ' 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:02.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.807 --rc genhtml_branch_coverage=1 00:37:02.807 --rc genhtml_function_coverage=1 00:37:02.807 --rc genhtml_legend=1 00:37:02.807 --rc geninfo_all_blocks=1 00:37:02.807 --rc geninfo_unexecuted_blocks=1 00:37:02.807 00:37:02.807 ' 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:02.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.807 --rc genhtml_branch_coverage=1 00:37:02.807 --rc genhtml_function_coverage=1 00:37:02.807 --rc genhtml_legend=1 00:37:02.807 --rc geninfo_all_blocks=1 00:37:02.807 --rc geninfo_unexecuted_blocks=1 00:37:02.807 00:37:02.807 ' 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.807 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:03.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:03.065 08:00:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:04.965 08:00:56 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:04.965 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:04.965 
08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:04.965 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:04.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:04.965 08:00:56 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:04.965 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:04.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:04.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:04.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:37:04.966 00:37:04.966 --- 10.0.0.2 ping statistics --- 00:37:04.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.966 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:04.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:04.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:37:04.966 00:37:04.966 --- 10.0.0.1 ping statistics --- 00:37:04.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:04.966 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:04.966 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:05.223 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:05.223 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:05.223 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:05.223 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:05.223 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:05.223 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3133932 00:37:05.223 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:05.224 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3133932 00:37:05.224 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3133932 ']' 00:37:05.224 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:05.224 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:05.224 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:05.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:05.224 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:05.224 08:00:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:05.224 [2024-11-19 08:00:57.005628] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:37:05.224 [2024-11-19 08:00:57.005792] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:05.224 [2024-11-19 08:00:57.149729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:05.481 [2024-11-19 08:00:57.283092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:05.481 [2024-11-19 08:00:57.283170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:05.481 [2024-11-19 08:00:57.283195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:05.481 [2024-11-19 08:00:57.283220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:05.481 [2024-11-19 08:00:57.283240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
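The trace above (the `nvmf_tcp_init` steps around `nvmf/common.sh@250`–`@291`) brings the target port up inside a dedicated network namespace before `nvmf_tgt` is launched there. Collected out of the xtrace into one place, the wiring looks like the sketch below. This is a setup fragment for illustration only: every command appears in the log, but the interface names (`cvl_0_0`, `cvl_0_1`), the namespace name, and the `10.0.0.x` addresses are specific to this host and run, and all of it requires root.

```shell
# Target NIC goes into its own namespace; initiator NIC stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port. The comment tag is what lets teardown
# later strip exactly these rules with `iptables-save | grep -v SPDK_NVMF`.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Connectivity check in both directions, as the log's ping output confirms.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The namespace split is why every target-side command in the rest of the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (`$NVMF_TARGET_NS_CMD`): the target and the bdevperf initiator share one machine but see each other only over the veth-style `10.0.0.0/24` link.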
00:37:05.481 [2024-11-19 08:00:57.285882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:05.481 [2024-11-19 08:00:57.285937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.481 [2024-11-19 08:00:57.285942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:06.046 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:06.046 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:06.046 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:06.046 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:06.046 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.304 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:06.304 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:06.304 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.304 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.304 [2024-11-19 08:00:57.992486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:06.304 08:00:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.304 Malloc0 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.304 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:06.304 [2024-11-19 08:00:58.106950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:06.305 
08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:06.305 { 00:37:06.305 "params": { 00:37:06.305 "name": "Nvme$subsystem", 00:37:06.305 "trtype": "$TEST_TRANSPORT", 00:37:06.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:06.305 "adrfam": "ipv4", 00:37:06.305 "trsvcid": "$NVMF_PORT", 00:37:06.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:06.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:06.305 "hdgst": ${hdgst:-false}, 00:37:06.305 "ddgst": ${ddgst:-false} 00:37:06.305 }, 00:37:06.305 "method": "bdev_nvme_attach_controller" 00:37:06.305 } 00:37:06.305 EOF 00:37:06.305 )") 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:06.305 08:00:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:06.305 "params": { 00:37:06.305 "name": "Nvme1", 00:37:06.305 "trtype": "tcp", 00:37:06.305 "traddr": "10.0.0.2", 00:37:06.305 "adrfam": "ipv4", 00:37:06.305 "trsvcid": "4420", 00:37:06.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:06.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:06.305 "hdgst": false, 00:37:06.305 "ddgst": false 00:37:06.305 }, 00:37:06.305 "method": "bdev_nvme_attach_controller" 00:37:06.305 }' 00:37:06.305 [2024-11-19 08:00:58.200600] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:37:06.305 [2024-11-19 08:00:58.200758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134118 ] 00:37:06.563 [2024-11-19 08:00:58.334314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.563 [2024-11-19 08:00:58.460812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.128 Running I/O for 1 seconds... 00:37:08.059 6200.00 IOPS, 24.22 MiB/s 00:37:08.059 Latency(us) 00:37:08.059 [2024-11-19T07:00:59.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.059 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:08.059 Verification LBA range: start 0x0 length 0x4000 00:37:08.059 Nvme1n1 : 1.01 6260.98 24.46 0.00 0.00 20325.77 1917.53 17087.91 00:37:08.059 [2024-11-19T07:00:59.989Z] =================================================================================================================== 00:37:08.059 [2024-11-19T07:00:59.989Z] Total : 6260.98 24.46 0.00 0.00 20325.77 1917.53 17087.91 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3134469 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:08.992 { 00:37:08.992 "params": { 00:37:08.992 "name": "Nvme$subsystem", 00:37:08.992 "trtype": "$TEST_TRANSPORT", 00:37:08.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:08.992 "adrfam": "ipv4", 00:37:08.992 "trsvcid": "$NVMF_PORT", 00:37:08.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:08.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:08.992 "hdgst": ${hdgst:-false}, 00:37:08.992 "ddgst": ${ddgst:-false} 00:37:08.992 }, 00:37:08.992 "method": "bdev_nvme_attach_controller" 00:37:08.992 } 00:37:08.992 EOF 00:37:08.992 )") 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:08.992 08:01:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:08.992 "params": { 00:37:08.992 "name": "Nvme1", 00:37:08.992 "trtype": "tcp", 00:37:08.992 "traddr": "10.0.0.2", 00:37:08.992 "adrfam": "ipv4", 00:37:08.992 "trsvcid": "4420", 00:37:08.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:08.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:08.992 "hdgst": false, 00:37:08.992 "ddgst": false 00:37:08.992 }, 00:37:08.992 "method": "bdev_nvme_attach_controller" 00:37:08.992 }' 00:37:08.992 [2024-11-19 08:01:00.850962] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:37:08.992 [2024-11-19 08:01:00.851114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134469 ] 00:37:09.250 [2024-11-19 08:01:00.989163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.250 [2024-11-19 08:01:01.113289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:09.815 Running I/O for 15 seconds... 00:37:12.124 6176.00 IOPS, 24.12 MiB/s [2024-11-19T07:01:04.054Z] 6256.00 IOPS, 24.44 MiB/s [2024-11-19T07:01:04.054Z] 08:01:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3133932 00:37:12.124 08:01:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:12.124 [2024-11-19 08:01:03.799418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.124 [2024-11-19 08:01:03.799493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.124 [2024-11-19 08:01:03.799554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.124 [2024-11-19 08:01:03.799592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.124 [2024-11-19 08:01:03.799621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.124 [2024-11-19 08:01:03.799648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.124 [2024-11-19 08:01:03.799687] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.124 [2024-11-19 08:01:03.799725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... repeated READ/WRITE command notices and "ABORTED - SQ DELETION (00/08)" completion notices for lba 104120 through 104736, qid:1, omitted ...] 00:37:12.126 
[2024-11-19 08:01:03.803820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.803842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.803866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.803889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.803914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.803937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.803960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 
[2024-11-19 08:01:03.804766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.126 [2024-11-19 08:01:03.804789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.126 [2024-11-19 08:01:03.804813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.804836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.804860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.804882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.804907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.804930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.804955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.804994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 
[2024-11-19 08:01:03.805680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.805950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.805972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.127 [2024-11-19 08:01:03.806527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.806551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:37:12.127 [2024-11-19 08:01:03.806588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:12.127 [2024-11-19 08:01:03.806611] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:12.127 [2024-11-19 08:01:03.806633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104216 len:8 PRP1 0x0 PRP2 0x0 00:37:12.127 [2024-11-19 08:01:03.806656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.127 [2024-11-19 08:01:03.811375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.127 [2024-11-19 08:01:03.811490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.127 [2024-11-19 08:01:03.812285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.127 [2024-11-19 08:01:03.812340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.127 [2024-11-19 08:01:03.812365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.127 [2024-11-19 08:01:03.812711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.127 [2024-11-19 08:01:03.813017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.127 [2024-11-19 08:01:03.813050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.127 [2024-11-19 08:01:03.813077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.127 [2024-11-19 08:01:03.813103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.127 [2024-11-19 08:01:03.826447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.127 [2024-11-19 08:01:03.826962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.128 [2024-11-19 08:01:03.827009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.128 [2024-11-19 08:01:03.827037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.128 [2024-11-19 08:01:03.827341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.128 [2024-11-19 08:01:03.827634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.128 [2024-11-19 08:01:03.827678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.128 [2024-11-19 08:01:03.827722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.128 [2024-11-19 08:01:03.827747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
(The same "resetting controller" / "connect() failed, errno = 111" / "Resetting controller failed." retry sequence repeats nine more times, at roughly 14-15 ms intervals, from [2024-11-19 08:01:03.841140] through [2024-11-19 08:01:03.959031], differing only in timestamps.)
00:37:12.129 [2024-11-19 08:01:03.972357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.129 [2024-11-19 08:01:03.972822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.129 [2024-11-19 08:01:03.972864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.129 [2024-11-19 08:01:03.972890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.129 [2024-11-19 08:01:03.973180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.129 [2024-11-19 08:01:03.973466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.129 [2024-11-19 08:01:03.973498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.129 [2024-11-19 08:01:03.973520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.129 [2024-11-19 08:01:03.973542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.129 [2024-11-19 08:01:03.986950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.129 [2024-11-19 08:01:03.987385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.129 [2024-11-19 08:01:03.987426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.129 [2024-11-19 08:01:03.987453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.129 [2024-11-19 08:01:03.987749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.129 [2024-11-19 08:01:03.988036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.129 [2024-11-19 08:01:03.988067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.129 [2024-11-19 08:01:03.988090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.129 [2024-11-19 08:01:03.988112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.129 [2024-11-19 08:01:04.001444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.129 [2024-11-19 08:01:04.001922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.129 [2024-11-19 08:01:04.001963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.129 [2024-11-19 08:01:04.001995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.129 [2024-11-19 08:01:04.002280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.129 [2024-11-19 08:01:04.002566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.129 [2024-11-19 08:01:04.002598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.129 [2024-11-19 08:01:04.002620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.129 [2024-11-19 08:01:04.002643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.129 [2024-11-19 08:01:04.016009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.129 [2024-11-19 08:01:04.016458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.129 [2024-11-19 08:01:04.016498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.129 [2024-11-19 08:01:04.016525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.129 [2024-11-19 08:01:04.016823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.129 [2024-11-19 08:01:04.017113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.129 [2024-11-19 08:01:04.017150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.129 [2024-11-19 08:01:04.017174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.129 [2024-11-19 08:01:04.017196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.129 [2024-11-19 08:01:04.030529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.129 [2024-11-19 08:01:04.031015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.129 [2024-11-19 08:01:04.031056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.129 [2024-11-19 08:01:04.031082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.129 [2024-11-19 08:01:04.031364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.129 [2024-11-19 08:01:04.031650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.129 [2024-11-19 08:01:04.031683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.129 [2024-11-19 08:01:04.031717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.129 [2024-11-19 08:01:04.031754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.129 [2024-11-19 08:01:04.045076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.129 [2024-11-19 08:01:04.045528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.129 [2024-11-19 08:01:04.045570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.129 [2024-11-19 08:01:04.045596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.129 [2024-11-19 08:01:04.045895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.129 [2024-11-19 08:01:04.046181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.129 [2024-11-19 08:01:04.046212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.129 [2024-11-19 08:01:04.046236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.129 [2024-11-19 08:01:04.046258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.388 [2024-11-19 08:01:04.059589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.388 [2024-11-19 08:01:04.060048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.388 [2024-11-19 08:01:04.060090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.388 [2024-11-19 08:01:04.060116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.388 [2024-11-19 08:01:04.060398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.388 [2024-11-19 08:01:04.060684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.388 [2024-11-19 08:01:04.060735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.388 [2024-11-19 08:01:04.060759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.388 [2024-11-19 08:01:04.060790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.388 [2024-11-19 08:01:04.074116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.388 [2024-11-19 08:01:04.074563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.388 [2024-11-19 08:01:04.074603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.388 [2024-11-19 08:01:04.074630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.388 [2024-11-19 08:01:04.074925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.388 [2024-11-19 08:01:04.075212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.388 [2024-11-19 08:01:04.075242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.388 [2024-11-19 08:01:04.075264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.388 [2024-11-19 08:01:04.075287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.388 [2024-11-19 08:01:04.088615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.388 [2024-11-19 08:01:04.089118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.388 [2024-11-19 08:01:04.089179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.388 [2024-11-19 08:01:04.089205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.388 [2024-11-19 08:01:04.089491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.388 [2024-11-19 08:01:04.089789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.388 [2024-11-19 08:01:04.089821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.388 [2024-11-19 08:01:04.089843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.388 [2024-11-19 08:01:04.089864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.388 [2024-11-19 08:01:04.103182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.388 [2024-11-19 08:01:04.103644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.388 [2024-11-19 08:01:04.103686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.388 [2024-11-19 08:01:04.103727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.388 [2024-11-19 08:01:04.104011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.388 [2024-11-19 08:01:04.104297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.388 [2024-11-19 08:01:04.104329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.388 [2024-11-19 08:01:04.104351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.388 [2024-11-19 08:01:04.104372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.388 [2024-11-19 08:01:04.117706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.388 [2024-11-19 08:01:04.118202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.388 [2024-11-19 08:01:04.118262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.388 [2024-11-19 08:01:04.118287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.388 [2024-11-19 08:01:04.118571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.388 [2024-11-19 08:01:04.118870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.388 [2024-11-19 08:01:04.118902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.388 [2024-11-19 08:01:04.118925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.388 [2024-11-19 08:01:04.118946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.388 [2024-11-19 08:01:04.132289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.388 [2024-11-19 08:01:04.132796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.388 [2024-11-19 08:01:04.132853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.388 [2024-11-19 08:01:04.132879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.388 [2024-11-19 08:01:04.133164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.388 [2024-11-19 08:01:04.133450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.388 [2024-11-19 08:01:04.133480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.388 [2024-11-19 08:01:04.133503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.388 [2024-11-19 08:01:04.133525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.388 [2024-11-19 08:01:04.146804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.388 [2024-11-19 08:01:04.147357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.388 [2024-11-19 08:01:04.147415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.388 [2024-11-19 08:01:04.147442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.388 [2024-11-19 08:01:04.147737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.388 [2024-11-19 08:01:04.148022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.388 [2024-11-19 08:01:04.148053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.388 [2024-11-19 08:01:04.148074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.388 [2024-11-19 08:01:04.148096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.388 [2024-11-19 08:01:04.161349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.388 [2024-11-19 08:01:04.161828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.388 [2024-11-19 08:01:04.161870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.388 [2024-11-19 08:01:04.161902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.388 [2024-11-19 08:01:04.162187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.388 [2024-11-19 08:01:04.162473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.388 [2024-11-19 08:01:04.162503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.388 [2024-11-19 08:01:04.162526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.388 [2024-11-19 08:01:04.162547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.388 [2024-11-19 08:01:04.175798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.388 [2024-11-19 08:01:04.176272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.388 [2024-11-19 08:01:04.176314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.389 [2024-11-19 08:01:04.176340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.389 [2024-11-19 08:01:04.176625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.389 [2024-11-19 08:01:04.176925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.389 [2024-11-19 08:01:04.176956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.389 [2024-11-19 08:01:04.176979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.389 [2024-11-19 08:01:04.177000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.389 [2024-11-19 08:01:04.190289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.389 [2024-11-19 08:01:04.190741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.389 [2024-11-19 08:01:04.190781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.389 [2024-11-19 08:01:04.190807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.389 [2024-11-19 08:01:04.191091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.389 [2024-11-19 08:01:04.191376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.389 [2024-11-19 08:01:04.191406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.389 [2024-11-19 08:01:04.191429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.389 [2024-11-19 08:01:04.191450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.389 [2024-11-19 08:01:04.204748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.389 [2024-11-19 08:01:04.205204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.389 [2024-11-19 08:01:04.205245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.389 [2024-11-19 08:01:04.205272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.389 [2024-11-19 08:01:04.205555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.389 [2024-11-19 08:01:04.205863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.389 [2024-11-19 08:01:04.205894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.389 [2024-11-19 08:01:04.205917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.389 [2024-11-19 08:01:04.205939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.389 [2024-11-19 08:01:04.219224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.389 [2024-11-19 08:01:04.219663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.389 [2024-11-19 08:01:04.219714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.389 [2024-11-19 08:01:04.219742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.389 [2024-11-19 08:01:04.220044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.389 [2024-11-19 08:01:04.220331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.389 [2024-11-19 08:01:04.220362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.389 [2024-11-19 08:01:04.220384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.389 [2024-11-19 08:01:04.220406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.389 [2024-11-19 08:01:04.233697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.389 [2024-11-19 08:01:04.234117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.389 [2024-11-19 08:01:04.234158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.389 [2024-11-19 08:01:04.234185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.389 [2024-11-19 08:01:04.234469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.389 [2024-11-19 08:01:04.234770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.389 [2024-11-19 08:01:04.234802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.389 [2024-11-19 08:01:04.234825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.389 [2024-11-19 08:01:04.234846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.389 [2024-11-19 08:01:04.248135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.389 [2024-11-19 08:01:04.248614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.389 [2024-11-19 08:01:04.248654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.389 [2024-11-19 08:01:04.248680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.389 [2024-11-19 08:01:04.248976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.389 [2024-11-19 08:01:04.249262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.389 [2024-11-19 08:01:04.249292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.389 [2024-11-19 08:01:04.249322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.389 [2024-11-19 08:01:04.249345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.389 [2024-11-19 08:01:04.262725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.389 [2024-11-19 08:01:04.263203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.389 [2024-11-19 08:01:04.263246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.389 [2024-11-19 08:01:04.263272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.389 [2024-11-19 08:01:04.263555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.389 [2024-11-19 08:01:04.263861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.389 [2024-11-19 08:01:04.263894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.389 [2024-11-19 08:01:04.263917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.389 [2024-11-19 08:01:04.263939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.389 [2024-11-19 08:01:04.277242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.389 [2024-11-19 08:01:04.277719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.389 [2024-11-19 08:01:04.277761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.389 [2024-11-19 08:01:04.277788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.389 [2024-11-19 08:01:04.278073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.389 [2024-11-19 08:01:04.278360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.389 [2024-11-19 08:01:04.278390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.389 [2024-11-19 08:01:04.278413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.389 [2024-11-19 08:01:04.278434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.389 [2024-11-19 08:01:04.291703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.389 [2024-11-19 08:01:04.292171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.389 [2024-11-19 08:01:04.292214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.389 [2024-11-19 08:01:04.292240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.389 [2024-11-19 08:01:04.292523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.389 [2024-11-19 08:01:04.292822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.389 [2024-11-19 08:01:04.292854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.389 [2024-11-19 08:01:04.292876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.389 [2024-11-19 08:01:04.292897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.389 [2024-11-19 08:01:04.306201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.389 [2024-11-19 08:01:04.306660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.389 [2024-11-19 08:01:04.306710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.389 [2024-11-19 08:01:04.306738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.389 [2024-11-19 08:01:04.307021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.389 [2024-11-19 08:01:04.307307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.389 [2024-11-19 08:01:04.307339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.389 [2024-11-19 08:01:04.307361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.390 [2024-11-19 08:01:04.307382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.648 [2024-11-19 08:01:04.320683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.648 [2024-11-19 08:01:04.321143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.648 [2024-11-19 08:01:04.321185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.648 [2024-11-19 08:01:04.321212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.648 [2024-11-19 08:01:04.321497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.648 [2024-11-19 08:01:04.321794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.648 [2024-11-19 08:01:04.321826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.648 [2024-11-19 08:01:04.321848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.648 [2024-11-19 08:01:04.321871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.648 [2024-11-19 08:01:04.335142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.648 [2024-11-19 08:01:04.335606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.648 [2024-11-19 08:01:04.335648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.648 [2024-11-19 08:01:04.335674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.648 [2024-11-19 08:01:04.335970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.648 [2024-11-19 08:01:04.336257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.648 [2024-11-19 08:01:04.336288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.648 [2024-11-19 08:01:04.336310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.648 [2024-11-19 08:01:04.336333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.648 [2024-11-19 08:01:04.349581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.648 [2024-11-19 08:01:04.350042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.648 [2024-11-19 08:01:04.350090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.648 [2024-11-19 08:01:04.350116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.648 [2024-11-19 08:01:04.350401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.648 [2024-11-19 08:01:04.350698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.648 [2024-11-19 08:01:04.350737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.648 [2024-11-19 08:01:04.350758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.648 [2024-11-19 08:01:04.350780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.364054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.364509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.364549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.364575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.364873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.365158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.649 [2024-11-19 08:01:04.365189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.649 [2024-11-19 08:01:04.365211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.649 [2024-11-19 08:01:04.365232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.378520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.378965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.379007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.379034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.379318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.379604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.649 [2024-11-19 08:01:04.379635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.649 [2024-11-19 08:01:04.379657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.649 [2024-11-19 08:01:04.379679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.392966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.393419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.393460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.393487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.393791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.394078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.649 [2024-11-19 08:01:04.394109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.649 [2024-11-19 08:01:04.394132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.649 [2024-11-19 08:01:04.394154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.407411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.407884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.407926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.407953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.408236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.408519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.649 [2024-11-19 08:01:04.408551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.649 [2024-11-19 08:01:04.408573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.649 [2024-11-19 08:01:04.408595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.421894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.422342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.422383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.422410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.422704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.422990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.649 [2024-11-19 08:01:04.423021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.649 [2024-11-19 08:01:04.423043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.649 [2024-11-19 08:01:04.423065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.436338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.436760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.436802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.436828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.437150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.437455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.649 [2024-11-19 08:01:04.437486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.649 [2024-11-19 08:01:04.437509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.649 [2024-11-19 08:01:04.437530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.450858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.451326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.451367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.451408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.451702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.451990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.649 [2024-11-19 08:01:04.452021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.649 [2024-11-19 08:01:04.452043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.649 [2024-11-19 08:01:04.452065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.465360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.465834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.465876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.465903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.466188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.466472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.649 [2024-11-19 08:01:04.466503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.649 [2024-11-19 08:01:04.466525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.649 [2024-11-19 08:01:04.466547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.479803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.480286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.480327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.480353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.480637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.480937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.649 [2024-11-19 08:01:04.480969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.649 [2024-11-19 08:01:04.480998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.649 [2024-11-19 08:01:04.481021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.649 [2024-11-19 08:01:04.494288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.649 [2024-11-19 08:01:04.494759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.649 [2024-11-19 08:01:04.494800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.649 [2024-11-19 08:01:04.494827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.649 [2024-11-19 08:01:04.495110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.649 [2024-11-19 08:01:04.495395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.650 [2024-11-19 08:01:04.495426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.650 [2024-11-19 08:01:04.495448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.650 [2024-11-19 08:01:04.495470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.650 [2024-11-19 08:01:04.508753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.650 [2024-11-19 08:01:04.509174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.650 [2024-11-19 08:01:04.509215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.650 [2024-11-19 08:01:04.509241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.650 [2024-11-19 08:01:04.509525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.650 [2024-11-19 08:01:04.509825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.650 [2024-11-19 08:01:04.509858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.650 [2024-11-19 08:01:04.509880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.650 [2024-11-19 08:01:04.509902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.650 [2024-11-19 08:01:04.523202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.650 [2024-11-19 08:01:04.523642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.650 [2024-11-19 08:01:04.523683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.650 [2024-11-19 08:01:04.523720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.650 [2024-11-19 08:01:04.524005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.650 [2024-11-19 08:01:04.524290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.650 [2024-11-19 08:01:04.524323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.650 [2024-11-19 08:01:04.524346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.650 [2024-11-19 08:01:04.524368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.650 [2024-11-19 08:01:04.537617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.650 [2024-11-19 08:01:04.538085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.650 [2024-11-19 08:01:04.538126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.650 [2024-11-19 08:01:04.538152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.650 [2024-11-19 08:01:04.538435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.650 [2024-11-19 08:01:04.538734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.650 [2024-11-19 08:01:04.538766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.650 [2024-11-19 08:01:04.538789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.650 [2024-11-19 08:01:04.538811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.650 [2024-11-19 08:01:04.552080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.650 [2024-11-19 08:01:04.552504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.650 [2024-11-19 08:01:04.552544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.650 [2024-11-19 08:01:04.552570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.650 [2024-11-19 08:01:04.552866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.650 [2024-11-19 08:01:04.553152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.650 [2024-11-19 08:01:04.553183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.650 [2024-11-19 08:01:04.553205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.650 [2024-11-19 08:01:04.553227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.650 [2024-11-19 08:01:04.566514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.650 [2024-11-19 08:01:04.566972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.650 [2024-11-19 08:01:04.567013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.650 [2024-11-19 08:01:04.567039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.650 [2024-11-19 08:01:04.567323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.650 [2024-11-19 08:01:04.567609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.650 [2024-11-19 08:01:04.567640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.650 [2024-11-19 08:01:04.567662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.650 [2024-11-19 08:01:04.567684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.909 [2024-11-19 08:01:04.581025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.909 [2024-11-19 08:01:04.581454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.909 [2024-11-19 08:01:04.581495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.909 [2024-11-19 08:01:04.581527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.909 [2024-11-19 08:01:04.581823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.909 [2024-11-19 08:01:04.582109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.909 [2024-11-19 08:01:04.582140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.909 [2024-11-19 08:01:04.582162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.909 [2024-11-19 08:01:04.582183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.909 [2024-11-19 08:01:04.595462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.909 [2024-11-19 08:01:04.595913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.909 [2024-11-19 08:01:04.595954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.909 [2024-11-19 08:01:04.595980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.909 [2024-11-19 08:01:04.596263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.909 [2024-11-19 08:01:04.596548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.909 [2024-11-19 08:01:04.596579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.909 [2024-11-19 08:01:04.596602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.909 [2024-11-19 08:01:04.596624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.909 [2024-11-19 08:01:04.609899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.909 [2024-11-19 08:01:04.610355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.909 [2024-11-19 08:01:04.610395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.909 [2024-11-19 08:01:04.610422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.909 [2024-11-19 08:01:04.610717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.909 [2024-11-19 08:01:04.611003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.909 [2024-11-19 08:01:04.611034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.909 [2024-11-19 08:01:04.611056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.909 [2024-11-19 08:01:04.611077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.909 [2024-11-19 08:01:04.624364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:12.909 [2024-11-19 08:01:04.624821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:12.909 [2024-11-19 08:01:04.624862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:12.909 [2024-11-19 08:01:04.624889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:12.909 [2024-11-19 08:01:04.625178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:12.909 [2024-11-19 08:01:04.625463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:12.909 [2024-11-19 08:01:04.625494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:12.909 [2024-11-19 08:01:04.625516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:12.909 [2024-11-19 08:01:04.625537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:12.909 [2024-11-19 08:01:04.638811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.909 [2024-11-19 08:01:04.639277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.909 [2024-11-19 08:01:04.639318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.909 [2024-11-19 08:01:04.639344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.909 [2024-11-19 08:01:04.639626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.909 [2024-11-19 08:01:04.639923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.909 [2024-11-19 08:01:04.639955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.909 [2024-11-19 08:01:04.639977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.909 [2024-11-19 08:01:04.639999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.909 [2024-11-19 08:01:04.653262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.909 [2024-11-19 08:01:04.653714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.909 [2024-11-19 08:01:04.653754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.909 [2024-11-19 08:01:04.653780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.909 [2024-11-19 08:01:04.654064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.909 [2024-11-19 08:01:04.654365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.909 [2024-11-19 08:01:04.654396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.909 [2024-11-19 08:01:04.654419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.909 [2024-11-19 08:01:04.654440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.909 [2024-11-19 08:01:04.667686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.909 [2024-11-19 08:01:04.668129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.909 [2024-11-19 08:01:04.668170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.909 [2024-11-19 08:01:04.668196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.909 [2024-11-19 08:01:04.668479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.909 [2024-11-19 08:01:04.668779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.668816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.668841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.668863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 [2024-11-19 08:01:04.682142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.682623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.682664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.910 [2024-11-19 08:01:04.682701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.910 [2024-11-19 08:01:04.682989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.910 [2024-11-19 08:01:04.683275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.683306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.683328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.683349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 [2024-11-19 08:01:04.696638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.697099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.697140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.910 [2024-11-19 08:01:04.697167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.910 [2024-11-19 08:01:04.697449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.910 [2024-11-19 08:01:04.697751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.697782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.697805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.697826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 4337.00 IOPS, 16.94 MiB/s [2024-11-19T07:01:04.840Z] [2024-11-19 08:01:04.711014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.711436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.711477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.910 [2024-11-19 08:01:04.711504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.910 [2024-11-19 08:01:04.711804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.910 [2024-11-19 08:01:04.712090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.712121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.712144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.712171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 [2024-11-19 08:01:04.725478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.726037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.726080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.910 [2024-11-19 08:01:04.726106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.910 [2024-11-19 08:01:04.726390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.910 [2024-11-19 08:01:04.726675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.726719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.726742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.726764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 [2024-11-19 08:01:04.740039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.740466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.740507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.910 [2024-11-19 08:01:04.740533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.910 [2024-11-19 08:01:04.740833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.910 [2024-11-19 08:01:04.741117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.741148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.741171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.741193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 [2024-11-19 08:01:04.754485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.754961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.755001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.910 [2024-11-19 08:01:04.755027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.910 [2024-11-19 08:01:04.755312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.910 [2024-11-19 08:01:04.755597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.755629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.755651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.755673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 [2024-11-19 08:01:04.768970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.769421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.769462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.910 [2024-11-19 08:01:04.769488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.910 [2024-11-19 08:01:04.769784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.910 [2024-11-19 08:01:04.770070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.770101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.770124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.770146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 [2024-11-19 08:01:04.783408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.783879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.783921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.910 [2024-11-19 08:01:04.783947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.910 [2024-11-19 08:01:04.784229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.910 [2024-11-19 08:01:04.784514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.784546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.784568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.784589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 [2024-11-19 08:01:04.797872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.798352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.798393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.910 [2024-11-19 08:01:04.798419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.910 [2024-11-19 08:01:04.798715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.910 [2024-11-19 08:01:04.799002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.910 [2024-11-19 08:01:04.799033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.910 [2024-11-19 08:01:04.799055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.910 [2024-11-19 08:01:04.799077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.910 [2024-11-19 08:01:04.812324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.910 [2024-11-19 08:01:04.812778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.910 [2024-11-19 08:01:04.812819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.911 [2024-11-19 08:01:04.812852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.911 [2024-11-19 08:01:04.813136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.911 [2024-11-19 08:01:04.813422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.911 [2024-11-19 08:01:04.813452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.911 [2024-11-19 08:01:04.813474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.911 [2024-11-19 08:01:04.813496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.911 [2024-11-19 08:01:04.826833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:12.911 [2024-11-19 08:01:04.827256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.911 [2024-11-19 08:01:04.827298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:12.911 [2024-11-19 08:01:04.827324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:12.911 [2024-11-19 08:01:04.827608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:12.911 [2024-11-19 08:01:04.827909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:12.911 [2024-11-19 08:01:04.827940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:12.911 [2024-11-19 08:01:04.827962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:12.911 [2024-11-19 08:01:04.827984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:12.911 [2024-11-19 08:01:04.841279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.170 [2024-11-19 08:01:04.841722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.170 [2024-11-19 08:01:04.841763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.170 [2024-11-19 08:01:04.841789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.170 [2024-11-19 08:01:04.842073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.170 [2024-11-19 08:01:04.842361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.170 [2024-11-19 08:01:04.842391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.170 [2024-11-19 08:01:04.842414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.170 [2024-11-19 08:01:04.842435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.170 [2024-11-19 08:01:04.855737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.170 [2024-11-19 08:01:04.856186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.170 [2024-11-19 08:01:04.856227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.170 [2024-11-19 08:01:04.856253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.170 [2024-11-19 08:01:04.856542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.170 [2024-11-19 08:01:04.856840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.170 [2024-11-19 08:01:04.856872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.170 [2024-11-19 08:01:04.856937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.170 [2024-11-19 08:01:04.856960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.170 [2024-11-19 08:01:04.870253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.170 [2024-11-19 08:01:04.870703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.170 [2024-11-19 08:01:04.870750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.170 [2024-11-19 08:01:04.870777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.170 [2024-11-19 08:01:04.871063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.170 [2024-11-19 08:01:04.871348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.170 [2024-11-19 08:01:04.871379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.170 [2024-11-19 08:01:04.871402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.170 [2024-11-19 08:01:04.871423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.170 [2024-11-19 08:01:04.884729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.170 [2024-11-19 08:01:04.885183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.170 [2024-11-19 08:01:04.885224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.170 [2024-11-19 08:01:04.885250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.170 [2024-11-19 08:01:04.885534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.170 [2024-11-19 08:01:04.885833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.170 [2024-11-19 08:01:04.885864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.170 [2024-11-19 08:01:04.885887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.170 [2024-11-19 08:01:04.885909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.170 [2024-11-19 08:01:04.899213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.170 [2024-11-19 08:01:04.899677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.170 [2024-11-19 08:01:04.899730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.170 [2024-11-19 08:01:04.899757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.170 [2024-11-19 08:01:04.900041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.170 [2024-11-19 08:01:04.900327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.170 [2024-11-19 08:01:04.900364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.170 [2024-11-19 08:01:04.900387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.170 [2024-11-19 08:01:04.900409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.170 [2024-11-19 08:01:04.913740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.170 [2024-11-19 08:01:04.914181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.170 [2024-11-19 08:01:04.914223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.170 [2024-11-19 08:01:04.914249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.170 [2024-11-19 08:01:04.914532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.170 [2024-11-19 08:01:04.914831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.170 [2024-11-19 08:01:04.914864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.170 [2024-11-19 08:01:04.914886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.170 [2024-11-19 08:01:04.914907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.170 [2024-11-19 08:01:04.928196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.170 [2024-11-19 08:01:04.928625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.170 [2024-11-19 08:01:04.928667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.170 [2024-11-19 08:01:04.928702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.170 [2024-11-19 08:01:04.928988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.170 [2024-11-19 08:01:04.929274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.170 [2024-11-19 08:01:04.929305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.170 [2024-11-19 08:01:04.929328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.170 [2024-11-19 08:01:04.929349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.170 [2024-11-19 08:01:04.942646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:04.943175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:04.943233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:04.943260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:04.943544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:04.943841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:04.943873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:04.943895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.171 [2024-11-19 08:01:04.943922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.171 [2024-11-19 08:01:04.957212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:04.957647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:04.957695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:04.957723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:04.958006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:04.958290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:04.958322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:04.958345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.171 [2024-11-19 08:01:04.958367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.171 [2024-11-19 08:01:04.971668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:04.972095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:04.972135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:04.972161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:04.972444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:04.972736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:04.972768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:04.972790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.171 [2024-11-19 08:01:04.972812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.171 [2024-11-19 08:01:04.986056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:04.986494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:04.986536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:04.986563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:04.986858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:04.987143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:04.987173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:04.987195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.171 [2024-11-19 08:01:04.987216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.171 [2024-11-19 08:01:05.000570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:05.001044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:05.001086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:05.001113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:05.001396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:05.001682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:05.001724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:05.001748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.171 [2024-11-19 08:01:05.001769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.171 [2024-11-19 08:01:05.015044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:05.015489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:05.015529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:05.015555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:05.015851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:05.016137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:05.016169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:05.016192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.171 [2024-11-19 08:01:05.016215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.171 [2024-11-19 08:01:05.029567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:05.030040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:05.030081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:05.030108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:05.030393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:05.030678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:05.030718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:05.030742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.171 [2024-11-19 08:01:05.030764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.171 [2024-11-19 08:01:05.044101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:05.044560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:05.044601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:05.044632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:05.044939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:05.045224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:05.045255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:05.045277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.171 [2024-11-19 08:01:05.045298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.171 [2024-11-19 08:01:05.058572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:05.059031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:05.059072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:05.059098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:05.059380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:05.059667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:05.059709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:05.059733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.171 [2024-11-19 08:01:05.059755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.171 [2024-11-19 08:01:05.073178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.171 [2024-11-19 08:01:05.073641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.171 [2024-11-19 08:01:05.073683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.171 [2024-11-19 08:01:05.073719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.171 [2024-11-19 08:01:05.074012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.171 [2024-11-19 08:01:05.074297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.171 [2024-11-19 08:01:05.074329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.171 [2024-11-19 08:01:05.074352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.172 [2024-11-19 08:01:05.074374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.172 [2024-11-19 08:01:05.087770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.172 [2024-11-19 08:01:05.088244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.172 [2024-11-19 08:01:05.088285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.172 [2024-11-19 08:01:05.088310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.172 [2024-11-19 08:01:05.088596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.172 [2024-11-19 08:01:05.088901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.172 [2024-11-19 08:01:05.088933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.172 [2024-11-19 08:01:05.088957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.172 [2024-11-19 08:01:05.088978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.172 [2024-11-19 08:01:05.102410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.430 [2024-11-19 08:01:05.102875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.430 [2024-11-19 08:01:05.102916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.430 [2024-11-19 08:01:05.102942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.103227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.103514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.103545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.103568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.103590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.117048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.431 [2024-11-19 08:01:05.117496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.431 [2024-11-19 08:01:05.117553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.431 [2024-11-19 08:01:05.117579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.117874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.118166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.118197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.118219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.118240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.131733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.431 [2024-11-19 08:01:05.132183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.431 [2024-11-19 08:01:05.132224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.431 [2024-11-19 08:01:05.132251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.132539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.132843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.132876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.132906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.132929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.146375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.431 [2024-11-19 08:01:05.146834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.431 [2024-11-19 08:01:05.146875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.431 [2024-11-19 08:01:05.146902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.147188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.147475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.147506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.147530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.147552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.160955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.431 [2024-11-19 08:01:05.161380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.431 [2024-11-19 08:01:05.161421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.431 [2024-11-19 08:01:05.161447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.161742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.162030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.162061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.162084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.162106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.175446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.431 [2024-11-19 08:01:05.175881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.431 [2024-11-19 08:01:05.175922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.431 [2024-11-19 08:01:05.175948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.176233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.176520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.176552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.176574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.176596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.189984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.431 [2024-11-19 08:01:05.190439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.431 [2024-11-19 08:01:05.190480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.431 [2024-11-19 08:01:05.190507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.190813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.191116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.191147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.191170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.191192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.204560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.431 [2024-11-19 08:01:05.204974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.431 [2024-11-19 08:01:05.205015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.431 [2024-11-19 08:01:05.205042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.205327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.205615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.205646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.205668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.205700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.219099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.431 [2024-11-19 08:01:05.219568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.431 [2024-11-19 08:01:05.219609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.431 [2024-11-19 08:01:05.219635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.219932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.220221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.220251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.220273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.220295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.233695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.431 [2024-11-19 08:01:05.234150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.431 [2024-11-19 08:01:05.234197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.431 [2024-11-19 08:01:05.234224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.431 [2024-11-19 08:01:05.234510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.431 [2024-11-19 08:01:05.234810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.431 [2024-11-19 08:01:05.234842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.431 [2024-11-19 08:01:05.234864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.431 [2024-11-19 08:01:05.234886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.431 [2024-11-19 08:01:05.248231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.432 [2024-11-19 08:01:05.248669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.432 [2024-11-19 08:01:05.248718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.432 [2024-11-19 08:01:05.248745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.432 [2024-11-19 08:01:05.249031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.432 [2024-11-19 08:01:05.249321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.432 [2024-11-19 08:01:05.249352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.432 [2024-11-19 08:01:05.249374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.432 [2024-11-19 08:01:05.249396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.432 [2024-11-19 08:01:05.262793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.432 [2024-11-19 08:01:05.263273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.432 [2024-11-19 08:01:05.263314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.432 [2024-11-19 08:01:05.263340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.432 [2024-11-19 08:01:05.263626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.432 [2024-11-19 08:01:05.263926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.432 [2024-11-19 08:01:05.263958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.432 [2024-11-19 08:01:05.263981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.432 [2024-11-19 08:01:05.264002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.432 [2024-11-19 08:01:05.277397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.432 [2024-11-19 08:01:05.277868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.432 [2024-11-19 08:01:05.277922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.432 [2024-11-19 08:01:05.277949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.432 [2024-11-19 08:01:05.278246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.432 [2024-11-19 08:01:05.278533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.432 [2024-11-19 08:01:05.278564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.432 [2024-11-19 08:01:05.278587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.432 [2024-11-19 08:01:05.278608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.432 [2024-11-19 08:01:05.292016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.432 [2024-11-19 08:01:05.292470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.432 [2024-11-19 08:01:05.292511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.432 [2024-11-19 08:01:05.292537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.432 [2024-11-19 08:01:05.292837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.432 [2024-11-19 08:01:05.293126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.432 [2024-11-19 08:01:05.293157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.432 [2024-11-19 08:01:05.293179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.432 [2024-11-19 08:01:05.293201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.432 [2024-11-19 08:01:05.306538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.432 [2024-11-19 08:01:05.307003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.432 [2024-11-19 08:01:05.307044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.432 [2024-11-19 08:01:05.307071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.432 [2024-11-19 08:01:05.307357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.432 [2024-11-19 08:01:05.307645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.432 [2024-11-19 08:01:05.307676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.432 [2024-11-19 08:01:05.307711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.432 [2024-11-19 08:01:05.307734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.432 [2024-11-19 08:01:05.321079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.432 [2024-11-19 08:01:05.321531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.432 [2024-11-19 08:01:05.321571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.432 [2024-11-19 08:01:05.321598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.432 [2024-11-19 08:01:05.321895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.432 [2024-11-19 08:01:05.322184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.432 [2024-11-19 08:01:05.322221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.432 [2024-11-19 08:01:05.322244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.432 [2024-11-19 08:01:05.322266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.432 [2024-11-19 08:01:05.335664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.432 [2024-11-19 08:01:05.336179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.432 [2024-11-19 08:01:05.336222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.432 [2024-11-19 08:01:05.336248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.432 [2024-11-19 08:01:05.336537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.432 [2024-11-19 08:01:05.336839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.432 [2024-11-19 08:01:05.336872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.432 [2024-11-19 08:01:05.336896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.432 [2024-11-19 08:01:05.336917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.432 [2024-11-19 08:01:05.350371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.432 [2024-11-19 08:01:05.350813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.432 [2024-11-19 08:01:05.350854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.432 [2024-11-19 08:01:05.350880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.432 [2024-11-19 08:01:05.351167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.432 [2024-11-19 08:01:05.351456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.432 [2024-11-19 08:01:05.351487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.432 [2024-11-19 08:01:05.351520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.432 [2024-11-19 08:01:05.351542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.692 [2024-11-19 08:01:05.364985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.692 [2024-11-19 08:01:05.365424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.692 [2024-11-19 08:01:05.365464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.692 [2024-11-19 08:01:05.365491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.692 [2024-11-19 08:01:05.365789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.692 [2024-11-19 08:01:05.366076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.692 [2024-11-19 08:01:05.366107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.692 [2024-11-19 08:01:05.366136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.692 [2024-11-19 08:01:05.366159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.692 [2024-11-19 08:01:05.379593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.692 [2024-11-19 08:01:05.380072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.692 [2024-11-19 08:01:05.380112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.692 [2024-11-19 08:01:05.380138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.692 [2024-11-19 08:01:05.380425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.692 [2024-11-19 08:01:05.380726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.692 [2024-11-19 08:01:05.380758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.692 [2024-11-19 08:01:05.380781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.692 [2024-11-19 08:01:05.380803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.692 [2024-11-19 08:01:05.394228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.692 [2024-11-19 08:01:05.394683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.692 [2024-11-19 08:01:05.394733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.692 [2024-11-19 08:01:05.394759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.692 [2024-11-19 08:01:05.395044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.692 [2024-11-19 08:01:05.395332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.692 [2024-11-19 08:01:05.395363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.692 [2024-11-19 08:01:05.395385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.692 [2024-11-19 08:01:05.395406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.692 [2024-11-19 08:01:05.408802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.692 [2024-11-19 08:01:05.409254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.692 [2024-11-19 08:01:05.409295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.692 [2024-11-19 08:01:05.409321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.692 [2024-11-19 08:01:05.409604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.692 [2024-11-19 08:01:05.409904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.692 [2024-11-19 08:01:05.409936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.692 [2024-11-19 08:01:05.409960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.692 [2024-11-19 08:01:05.409982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.692 [2024-11-19 08:01:05.423332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.692 [2024-11-19 08:01:05.423796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.692 [2024-11-19 08:01:05.423836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.692 [2024-11-19 08:01:05.423862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.692 [2024-11-19 08:01:05.424148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.692 [2024-11-19 08:01:05.424455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.692 [2024-11-19 08:01:05.424487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.692 [2024-11-19 08:01:05.424510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.692 [2024-11-19 08:01:05.424532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.692 [2024-11-19 08:01:05.437896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.692 [2024-11-19 08:01:05.438326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.692 [2024-11-19 08:01:05.438366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.692 [2024-11-19 08:01:05.438393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.692 [2024-11-19 08:01:05.438679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.692 [2024-11-19 08:01:05.438980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.692 [2024-11-19 08:01:05.439011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.692 [2024-11-19 08:01:05.439034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.692 [2024-11-19 08:01:05.439055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.692 [2024-11-19 08:01:05.452435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.692 [2024-11-19 08:01:05.452881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.692 [2024-11-19 08:01:05.452922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.692 [2024-11-19 08:01:05.452948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.692 [2024-11-19 08:01:05.453234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.692 [2024-11-19 08:01:05.453521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.692 [2024-11-19 08:01:05.453552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.453575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.453596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.466993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.467417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.693 [2024-11-19 08:01:05.467457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.693 [2024-11-19 08:01:05.467490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.693 [2024-11-19 08:01:05.467789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.693 [2024-11-19 08:01:05.468077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.693 [2024-11-19 08:01:05.468109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.468132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.468153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.481540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.481978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.693 [2024-11-19 08:01:05.482018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.693 [2024-11-19 08:01:05.482044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.693 [2024-11-19 08:01:05.482346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.693 [2024-11-19 08:01:05.482634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.693 [2024-11-19 08:01:05.482666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.482697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.482722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.496057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.496498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.693 [2024-11-19 08:01:05.496539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.693 [2024-11-19 08:01:05.496566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.693 [2024-11-19 08:01:05.496865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.693 [2024-11-19 08:01:05.497152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.693 [2024-11-19 08:01:05.497183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.497206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.497227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.510609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.511054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.693 [2024-11-19 08:01:05.511095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.693 [2024-11-19 08:01:05.511121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.693 [2024-11-19 08:01:05.511412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.693 [2024-11-19 08:01:05.511714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.693 [2024-11-19 08:01:05.511750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.511773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.511801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.525282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.525737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.693 [2024-11-19 08:01:05.525779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.693 [2024-11-19 08:01:05.525806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.693 [2024-11-19 08:01:05.526093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.693 [2024-11-19 08:01:05.526381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.693 [2024-11-19 08:01:05.526413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.526436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.526458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.539918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.540402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.693 [2024-11-19 08:01:05.540443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.693 [2024-11-19 08:01:05.540470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.693 [2024-11-19 08:01:05.540767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.693 [2024-11-19 08:01:05.541055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.693 [2024-11-19 08:01:05.541086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.541109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.541131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.554521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.554978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.693 [2024-11-19 08:01:05.555019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.693 [2024-11-19 08:01:05.555045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.693 [2024-11-19 08:01:05.555329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.693 [2024-11-19 08:01:05.555616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.693 [2024-11-19 08:01:05.555653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.555678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.555712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.569055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.569489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.693 [2024-11-19 08:01:05.569530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.693 [2024-11-19 08:01:05.569556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.693 [2024-11-19 08:01:05.569853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.693 [2024-11-19 08:01:05.570141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.693 [2024-11-19 08:01:05.570172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.570194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.570215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.583652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.584126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.693 [2024-11-19 08:01:05.584167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.693 [2024-11-19 08:01:05.584194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.693 [2024-11-19 08:01:05.584480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.693 [2024-11-19 08:01:05.584782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.693 [2024-11-19 08:01:05.584814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.693 [2024-11-19 08:01:05.584836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.693 [2024-11-19 08:01:05.584858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.693 [2024-11-19 08:01:05.598317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.693 [2024-11-19 08:01:05.598779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.694 [2024-11-19 08:01:05.598828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.694 [2024-11-19 08:01:05.598854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.694 [2024-11-19 08:01:05.599143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.694 [2024-11-19 08:01:05.599433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.694 [2024-11-19 08:01:05.599464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.694 [2024-11-19 08:01:05.599487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.694 [2024-11-19 08:01:05.599518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.694 [2024-11-19 08:01:05.612997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.694 [2024-11-19 08:01:05.613474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.694 [2024-11-19 08:01:05.613516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.694 [2024-11-19 08:01:05.613543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.694 [2024-11-19 08:01:05.613851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.694 [2024-11-19 08:01:05.614138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.694 [2024-11-19 08:01:05.614171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.694 [2024-11-19 08:01:05.614193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.694 [2024-11-19 08:01:05.614215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.953 [2024-11-19 08:01:05.627712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:13.953 [2024-11-19 08:01:05.628166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:13.953 [2024-11-19 08:01:05.628206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:13.953 [2024-11-19 08:01:05.628233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:13.953 [2024-11-19 08:01:05.628519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:13.953 [2024-11-19 08:01:05.628819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:13.953 [2024-11-19 08:01:05.628851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:13.953 [2024-11-19 08:01:05.628875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:13.953 [2024-11-19 08:01:05.628897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:13.953 [2024-11-19 08:01:05.642307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.953 [2024-11-19 08:01:05.642745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.953 [2024-11-19 08:01:05.642786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.953 [2024-11-19 08:01:05.642812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.953 [2024-11-19 08:01:05.643097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.953 [2024-11-19 08:01:05.643383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.953 [2024-11-19 08:01:05.643414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.953 [2024-11-19 08:01:05.643438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.953 [2024-11-19 08:01:05.643459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.953 [2024-11-19 08:01:05.656835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.953 [2024-11-19 08:01:05.657306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.953 [2024-11-19 08:01:05.657347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.953 [2024-11-19 08:01:05.657373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.953 [2024-11-19 08:01:05.657659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.953 [2024-11-19 08:01:05.657959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.953 [2024-11-19 08:01:05.657991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.953 [2024-11-19 08:01:05.658013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.953 [2024-11-19 08:01:05.658035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.953 [2024-11-19 08:01:05.671423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.953 [2024-11-19 08:01:05.671895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.953 [2024-11-19 08:01:05.671936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.953 [2024-11-19 08:01:05.671962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.953 [2024-11-19 08:01:05.672247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.953 [2024-11-19 08:01:05.672536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.953 [2024-11-19 08:01:05.672567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.953 [2024-11-19 08:01:05.672589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.953 [2024-11-19 08:01:05.672610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.953 [2024-11-19 08:01:05.686026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.953 [2024-11-19 08:01:05.686463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.953 [2024-11-19 08:01:05.686506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.953 [2024-11-19 08:01:05.686532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.953 [2024-11-19 08:01:05.686827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.953 [2024-11-19 08:01:05.687115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.953 [2024-11-19 08:01:05.687159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.953 [2024-11-19 08:01:05.687182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.953 [2024-11-19 08:01:05.687204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.953 [2024-11-19 08:01:05.700606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.953 [2024-11-19 08:01:05.701047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.953 [2024-11-19 08:01:05.701087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.953 [2024-11-19 08:01:05.701120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.953 [2024-11-19 08:01:05.701406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.953 [2024-11-19 08:01:05.701702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.953 [2024-11-19 08:01:05.701734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.953 [2024-11-19 08:01:05.701756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.953 [2024-11-19 08:01:05.701778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.953 3252.75 IOPS, 12.71 MiB/s [2024-11-19T07:01:05.883Z] [2024-11-19 08:01:05.715074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.953 [2024-11-19 08:01:05.715506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.953 [2024-11-19 08:01:05.715547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.715575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.715874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.716161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.716192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.716215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.716237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.729656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.954 [2024-11-19 08:01:05.730106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.954 [2024-11-19 08:01:05.730149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.730175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.730460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.730764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.730796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.730818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.730839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.744181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.954 [2024-11-19 08:01:05.744630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.954 [2024-11-19 08:01:05.744671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.744707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.744993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.745287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.745319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.745341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.745363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.758698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.954 [2024-11-19 08:01:05.759130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.954 [2024-11-19 08:01:05.759170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.759196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.759480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.759780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.759812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.759835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.759857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.773214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.954 [2024-11-19 08:01:05.773657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.954 [2024-11-19 08:01:05.773705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.773733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.774018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.774305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.774336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.774358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.774380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.787751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.954 [2024-11-19 08:01:05.788177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.954 [2024-11-19 08:01:05.788218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.788244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.788529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.788830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.788867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.788891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.788913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.802292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.954 [2024-11-19 08:01:05.802732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.954 [2024-11-19 08:01:05.802774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.802801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.803086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.803374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.803405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.803428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.803450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.816794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.954 [2024-11-19 08:01:05.817222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.954 [2024-11-19 08:01:05.817263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.817290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.817575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.817874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.817906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.817929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.817950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.831382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.954 [2024-11-19 08:01:05.831903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.954 [2024-11-19 08:01:05.831946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.831973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.832260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.832549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.832581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.832604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.832632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.845813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.954 [2024-11-19 08:01:05.846288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.954 [2024-11-19 08:01:05.846330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.954 [2024-11-19 08:01:05.846357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.954 [2024-11-19 08:01:05.846642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.954 [2024-11-19 08:01:05.846942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.954 [2024-11-19 08:01:05.846974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.954 [2024-11-19 08:01:05.846996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.954 [2024-11-19 08:01:05.847018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.954 [2024-11-19 08:01:05.860450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.955 [2024-11-19 08:01:05.860892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.955 [2024-11-19 08:01:05.860933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.955 [2024-11-19 08:01:05.860959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.955 [2024-11-19 08:01:05.861250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.955 [2024-11-19 08:01:05.861542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.955 [2024-11-19 08:01:05.861573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.955 [2024-11-19 08:01:05.861595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.955 [2024-11-19 08:01:05.861617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:13.955 [2024-11-19 08:01:05.875054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:13.955 [2024-11-19 08:01:05.875507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:13.955 [2024-11-19 08:01:05.875548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:13.955 [2024-11-19 08:01:05.875574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:13.955 [2024-11-19 08:01:05.875872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:13.955 [2024-11-19 08:01:05.876162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:13.955 [2024-11-19 08:01:05.876194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:13.955 [2024-11-19 08:01:05.876217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:13.955 [2024-11-19 08:01:05.876239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.214 [2024-11-19 08:01:05.889680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.214 [2024-11-19 08:01:05.890133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.214 [2024-11-19 08:01:05.890174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.214 [2024-11-19 08:01:05.890200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.214 [2024-11-19 08:01:05.890487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.214 [2024-11-19 08:01:05.890787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.214 [2024-11-19 08:01:05.890819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.214 [2024-11-19 08:01:05.890843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.214 [2024-11-19 08:01:05.890880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.214 [2024-11-19 08:01:05.904284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.214 [2024-11-19 08:01:05.904722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.214 [2024-11-19 08:01:05.904764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.214 [2024-11-19 08:01:05.904791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.214 [2024-11-19 08:01:05.905078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.214 [2024-11-19 08:01:05.905369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.214 [2024-11-19 08:01:05.905400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.214 [2024-11-19 08:01:05.905422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.214 [2024-11-19 08:01:05.905444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.214 [2024-11-19 08:01:05.918874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.214 [2024-11-19 08:01:05.919324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.214 [2024-11-19 08:01:05.919366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.214 [2024-11-19 08:01:05.919393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.214 [2024-11-19 08:01:05.919679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.214 [2024-11-19 08:01:05.919977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.214 [2024-11-19 08:01:05.920008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.214 [2024-11-19 08:01:05.920031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.214 [2024-11-19 08:01:05.920054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.214 [2024-11-19 08:01:05.933444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.214 [2024-11-19 08:01:05.933860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:05.933902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:05.933934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:05.934222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:05.934509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:05.934540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:05.934562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.215 [2024-11-19 08:01:05.934583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.215 [2024-11-19 08:01:05.947967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.215 [2024-11-19 08:01:05.948426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:05.948467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:05.948493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:05.948792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:05.949080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:05.949112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:05.949135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.215 [2024-11-19 08:01:05.949156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.215 [2024-11-19 08:01:05.962587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.215 [2024-11-19 08:01:05.963102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:05.963144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:05.963171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:05.963458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:05.963759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:05.963791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:05.963814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.215 [2024-11-19 08:01:05.963837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.215 [2024-11-19 08:01:05.977216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.215 [2024-11-19 08:01:05.977667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:05.977714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:05.977741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:05.978028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:05.978322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:05.978353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:05.978376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.215 [2024-11-19 08:01:05.978398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.215 [2024-11-19 08:01:05.991782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.215 [2024-11-19 08:01:05.992184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:05.992225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:05.992251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:05.992536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:05.992835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:05.992867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:05.992889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.215 [2024-11-19 08:01:05.992910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.215 [2024-11-19 08:01:06.006268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.215 [2024-11-19 08:01:06.006735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:06.006777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:06.006803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:06.007090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:06.007378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:06.007409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:06.007431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.215 [2024-11-19 08:01:06.007452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.215 [2024-11-19 08:01:06.020807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.215 [2024-11-19 08:01:06.021234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:06.021275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:06.021302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:06.021588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:06.021887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:06.021920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:06.021949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.215 [2024-11-19 08:01:06.021972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.215 [2024-11-19 08:01:06.035471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.215 [2024-11-19 08:01:06.035967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:06.036009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:06.036036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:06.036322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:06.036611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:06.036642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:06.036665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.215 [2024-11-19 08:01:06.036697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.215 [2024-11-19 08:01:06.049549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.215 [2024-11-19 08:01:06.050031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:06.050067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:06.050091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:06.050362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:06.050597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:06.050623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:06.050641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.215 [2024-11-19 08:01:06.050659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.215 [2024-11-19 08:01:06.063613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.215 [2024-11-19 08:01:06.064076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.215 [2024-11-19 08:01:06.064129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.215 [2024-11-19 08:01:06.064154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.215 [2024-11-19 08:01:06.064446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.215 [2024-11-19 08:01:06.064709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.215 [2024-11-19 08:01:06.064738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.215 [2024-11-19 08:01:06.064758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.216 [2024-11-19 08:01:06.064779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.216 [2024-11-19 08:01:06.077732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.216 [2024-11-19 08:01:06.078143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.216 [2024-11-19 08:01:06.078195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.216 [2024-11-19 08:01:06.078220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.216 [2024-11-19 08:01:06.078502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.216 [2024-11-19 08:01:06.078765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.216 [2024-11-19 08:01:06.078793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.216 [2024-11-19 08:01:06.078812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.216 [2024-11-19 08:01:06.078830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.216 [2024-11-19 08:01:06.091684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.216 [2024-11-19 08:01:06.092105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.216 [2024-11-19 08:01:06.092157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.216 [2024-11-19 08:01:06.092181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.216 [2024-11-19 08:01:06.092479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.216 [2024-11-19 08:01:06.092758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.216 [2024-11-19 08:01:06.092786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.216 [2024-11-19 08:01:06.092807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.216 [2024-11-19 08:01:06.092825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.216 [2024-11-19 08:01:06.105532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.216 [2024-11-19 08:01:06.105964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.216 [2024-11-19 08:01:06.106001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.216 [2024-11-19 08:01:06.106024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.216 [2024-11-19 08:01:06.106314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.216 [2024-11-19 08:01:06.106549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.216 [2024-11-19 08:01:06.106575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.216 [2024-11-19 08:01:06.106594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.216 [2024-11-19 08:01:06.106612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.216 [2024-11-19 08:01:06.119437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.216 [2024-11-19 08:01:06.119916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.216 [2024-11-19 08:01:06.119958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.216 [2024-11-19 08:01:06.119983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.216 [2024-11-19 08:01:06.120264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.216 [2024-11-19 08:01:06.120499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.216 [2024-11-19 08:01:06.120524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.216 [2024-11-19 08:01:06.120543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.216 [2024-11-19 08:01:06.120561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.216 [2024-11-19 08:01:06.133204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.216 [2024-11-19 08:01:06.133634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.216 [2024-11-19 08:01:06.133671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.216 [2024-11-19 08:01:06.133703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.216 [2024-11-19 08:01:06.133975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.216 [2024-11-19 08:01:06.134242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.216 [2024-11-19 08:01:06.134268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.216 [2024-11-19 08:01:06.134287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.216 [2024-11-19 08:01:06.134304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.476 [2024-11-19 08:01:06.147405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.476 [2024-11-19 08:01:06.147881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.476 [2024-11-19 08:01:06.147920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.476 [2024-11-19 08:01:06.147945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.476 [2024-11-19 08:01:06.148230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.476 [2024-11-19 08:01:06.148519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.476 [2024-11-19 08:01:06.148548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.476 [2024-11-19 08:01:06.148568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.476 [2024-11-19 08:01:06.148587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.476 [2024-11-19 08:01:06.161250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.476 [2024-11-19 08:01:06.161739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.476 [2024-11-19 08:01:06.161778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.476 [2024-11-19 08:01:06.161801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.476 [2024-11-19 08:01:06.162074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.476 [2024-11-19 08:01:06.162311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.476 [2024-11-19 08:01:06.162337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.476 [2024-11-19 08:01:06.162356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.476 [2024-11-19 08:01:06.162373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.476 [2024-11-19 08:01:06.175168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.476 [2024-11-19 08:01:06.175607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.476 [2024-11-19 08:01:06.175643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.476 [2024-11-19 08:01:06.175666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.476 [2024-11-19 08:01:06.175956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.476 [2024-11-19 08:01:06.176209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.476 [2024-11-19 08:01:06.176235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.476 [2024-11-19 08:01:06.176253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.476 [2024-11-19 08:01:06.176271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.476 [2024-11-19 08:01:06.188880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.476 [2024-11-19 08:01:06.189399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.476 [2024-11-19 08:01:06.189450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.476 [2024-11-19 08:01:06.189474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.476 [2024-11-19 08:01:06.189780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.476 [2024-11-19 08:01:06.190054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.476 [2024-11-19 08:01:06.190094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.476 [2024-11-19 08:01:06.190113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.476 [2024-11-19 08:01:06.190131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.476 [2024-11-19 08:01:06.202653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.476 [2024-11-19 08:01:06.203170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.476 [2024-11-19 08:01:06.203207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.476 [2024-11-19 08:01:06.203231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.476 [2024-11-19 08:01:06.203529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.476 [2024-11-19 08:01:06.203822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.476 [2024-11-19 08:01:06.203855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.476 [2024-11-19 08:01:06.203890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.476 [2024-11-19 08:01:06.203909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.476 [2024-11-19 08:01:06.216364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.476 [2024-11-19 08:01:06.216794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.477 [2024-11-19 08:01:06.216831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.477 [2024-11-19 08:01:06.216855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.477 [2024-11-19 08:01:06.217139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.477 [2024-11-19 08:01:06.217377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.477 [2024-11-19 08:01:06.217403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.477 [2024-11-19 08:01:06.217421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.477 [2024-11-19 08:01:06.217439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.477 [2024-11-19 08:01:06.230144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.477 [2024-11-19 08:01:06.230553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.477 [2024-11-19 08:01:06.230589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.477 [2024-11-19 08:01:06.230613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.477 [2024-11-19 08:01:06.230904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.477 [2024-11-19 08:01:06.231195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.477 [2024-11-19 08:01:06.231221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.477 [2024-11-19 08:01:06.231240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.477 [2024-11-19 08:01:06.231259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.477 [2024-11-19 08:01:06.243896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.477 [2024-11-19 08:01:06.244340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.477 [2024-11-19 08:01:06.244376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.477 [2024-11-19 08:01:06.244401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.477 [2024-11-19 08:01:06.244681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.477 [2024-11-19 08:01:06.244957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.477 [2024-11-19 08:01:06.244984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.477 [2024-11-19 08:01:06.245023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.477 [2024-11-19 08:01:06.245043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.477 [2024-11-19 08:01:06.257644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.477 [2024-11-19 08:01:06.258103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.477 [2024-11-19 08:01:06.258154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.477 [2024-11-19 08:01:06.258177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.477 [2024-11-19 08:01:06.258452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.477 [2024-11-19 08:01:06.258714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.477 [2024-11-19 08:01:06.258756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.477 [2024-11-19 08:01:06.258776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.477 [2024-11-19 08:01:06.258795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.477 [2024-11-19 08:01:06.271376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.477 [2024-11-19 08:01:06.271794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.477 [2024-11-19 08:01:06.271831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.477 [2024-11-19 08:01:06.271855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.477 [2024-11-19 08:01:06.272135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.477 [2024-11-19 08:01:06.272371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.477 [2024-11-19 08:01:06.272397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.477 [2024-11-19 08:01:06.272415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.477 [2024-11-19 08:01:06.272433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.477 [2024-11-19 08:01:06.285008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.477 [2024-11-19 08:01:06.285406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.477 [2024-11-19 08:01:06.285443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.477 [2024-11-19 08:01:06.285467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.477 [2024-11-19 08:01:06.285778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.477 [2024-11-19 08:01:06.286051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.477 [2024-11-19 08:01:06.286093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.477 [2024-11-19 08:01:06.286112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.477 [2024-11-19 08:01:06.286146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.477 [2024-11-19 08:01:06.298635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.477 [2024-11-19 08:01:06.299064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.477 [2024-11-19 08:01:06.299114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.477 [2024-11-19 08:01:06.299150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.477 [2024-11-19 08:01:06.299425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.477 [2024-11-19 08:01:06.299661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.477 [2024-11-19 08:01:06.299710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.477 [2024-11-19 08:01:06.299730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.477 [2024-11-19 08:01:06.299765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.477 [2024-11-19 08:01:06.312344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.477 [2024-11-19 08:01:06.312770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.477 [2024-11-19 08:01:06.312807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.477 [2024-11-19 08:01:06.312831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.477 [2024-11-19 08:01:06.313129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.477 [2024-11-19 08:01:06.313367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.477 [2024-11-19 08:01:06.313392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.477 [2024-11-19 08:01:06.313410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.477 [2024-11-19 08:01:06.313428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.477 [2024-11-19 08:01:06.326106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.477 [2024-11-19 08:01:06.326485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.477 [2024-11-19 08:01:06.326536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.477 [2024-11-19 08:01:06.326559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.477 [2024-11-19 08:01:06.326869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.477 [2024-11-19 08:01:06.327148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.477 [2024-11-19 08:01:06.327174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.477 [2024-11-19 08:01:06.327193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.477 [2024-11-19 08:01:06.327211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.477 [2024-11-19 08:01:06.340053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.477 [2024-11-19 08:01:06.340426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.478 [2024-11-19 08:01:06.340477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.478 [2024-11-19 08:01:06.340505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.478 [2024-11-19 08:01:06.340802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.478 [2024-11-19 08:01:06.341082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.478 [2024-11-19 08:01:06.341109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.478 [2024-11-19 08:01:06.341127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.478 [2024-11-19 08:01:06.341145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.478 [2024-11-19 08:01:06.353942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.478 [2024-11-19 08:01:06.354378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.478 [2024-11-19 08:01:06.354428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.478 [2024-11-19 08:01:06.354451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.478 [2024-11-19 08:01:06.354775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.478 [2024-11-19 08:01:06.355043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.478 [2024-11-19 08:01:06.355068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.478 [2024-11-19 08:01:06.355087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.478 [2024-11-19 08:01:06.355105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.478 [2024-11-19 08:01:06.367599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.478 [2024-11-19 08:01:06.368005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.478 [2024-11-19 08:01:06.368057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.478 [2024-11-19 08:01:06.368082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.478 [2024-11-19 08:01:06.368363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.478 [2024-11-19 08:01:06.368597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.478 [2024-11-19 08:01:06.368623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.478 [2024-11-19 08:01:06.368641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.478 [2024-11-19 08:01:06.368659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.478 [2024-11-19 08:01:06.381264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.478 [2024-11-19 08:01:06.381695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.478 [2024-11-19 08:01:06.381746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.478 [2024-11-19 08:01:06.381785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.478 [2024-11-19 08:01:06.382090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.478 [2024-11-19 08:01:06.382325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.478 [2024-11-19 08:01:06.382350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.478 [2024-11-19 08:01:06.382369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.478 [2024-11-19 08:01:06.382387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.478 [2024-11-19 08:01:06.394974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.478 [2024-11-19 08:01:06.395453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.478 [2024-11-19 08:01:06.395505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.478 [2024-11-19 08:01:06.395529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.478 [2024-11-19 08:01:06.395810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.478 [2024-11-19 08:01:06.396091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.478 [2024-11-19 08:01:06.396117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.478 [2024-11-19 08:01:06.396135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.478 [2024-11-19 08:01:06.396152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.738 [2024-11-19 08:01:06.409083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.738 [2024-11-19 08:01:06.409495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.738 [2024-11-19 08:01:06.409531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.738 [2024-11-19 08:01:06.409555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.738 [2024-11-19 08:01:06.409825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.738 [2024-11-19 08:01:06.410086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.738 [2024-11-19 08:01:06.410115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.738 [2024-11-19 08:01:06.410135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.738 [2024-11-19 08:01:06.410155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.738 [2024-11-19 08:01:06.423029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.738 [2024-11-19 08:01:06.423446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.738 [2024-11-19 08:01:06.423482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.738 [2024-11-19 08:01:06.423505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.739 [2024-11-19 08:01:06.423803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.739 [2024-11-19 08:01:06.424093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.739 [2024-11-19 08:01:06.424124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.739 [2024-11-19 08:01:06.424144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.739 [2024-11-19 08:01:06.424162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.739 [2024-11-19 08:01:06.436859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.739 [2024-11-19 08:01:06.437406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.739 [2024-11-19 08:01:06.437442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.739 [2024-11-19 08:01:06.437482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.739 [2024-11-19 08:01:06.437793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.739 [2024-11-19 08:01:06.438045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.739 [2024-11-19 08:01:06.438086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.739 [2024-11-19 08:01:06.438105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.739 [2024-11-19 08:01:06.438124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.739 [2024-11-19 08:01:06.450573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.739 [2024-11-19 08:01:06.451030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.739 [2024-11-19 08:01:06.451068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.739 [2024-11-19 08:01:06.451092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.739 [2024-11-19 08:01:06.451374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.739 [2024-11-19 08:01:06.451609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.739 [2024-11-19 08:01:06.451635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.739 [2024-11-19 08:01:06.451653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.739 [2024-11-19 08:01:06.451671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.739 [2024-11-19 08:01:06.464309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.739 [2024-11-19 08:01:06.464676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.739 [2024-11-19 08:01:06.464735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.739 [2024-11-19 08:01:06.464774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.739 [2024-11-19 08:01:06.465058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.739 [2024-11-19 08:01:06.465294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.739 [2024-11-19 08:01:06.465320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.739 [2024-11-19 08:01:06.465339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.739 [2024-11-19 08:01:06.465362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.739 [2024-11-19 08:01:06.478046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.739 [2024-11-19 08:01:06.478484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.739 [2024-11-19 08:01:06.478521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.739 [2024-11-19 08:01:06.478544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.739 [2024-11-19 08:01:06.478829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.739 [2024-11-19 08:01:06.479090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.739 [2024-11-19 08:01:06.479116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.739 [2024-11-19 08:01:06.479136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.739 [2024-11-19 08:01:06.479153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.739 [2024-11-19 08:01:06.491823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.739 [2024-11-19 08:01:06.492272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.739 [2024-11-19 08:01:06.492308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.739 [2024-11-19 08:01:06.492332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.739 [2024-11-19 08:01:06.492612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.739 [2024-11-19 08:01:06.492897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.739 [2024-11-19 08:01:06.492925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.739 [2024-11-19 08:01:06.492945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.739 [2024-11-19 08:01:06.492964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.739 [2024-11-19 08:01:06.505682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.739 [2024-11-19 08:01:06.506110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.739 [2024-11-19 08:01:06.506161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.739 [2024-11-19 08:01:06.506184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.739 [2024-11-19 08:01:06.506480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.739 [2024-11-19 08:01:06.506761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.739 [2024-11-19 08:01:06.506788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.739 [2024-11-19 08:01:06.506808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.739 [2024-11-19 08:01:06.506827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.739 [2024-11-19 08:01:06.519495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.739 [2024-11-19 08:01:06.519899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.739 [2024-11-19 08:01:06.519950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.739 [2024-11-19 08:01:06.519974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.739 [2024-11-19 08:01:06.520257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.739 [2024-11-19 08:01:06.520493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.739 [2024-11-19 08:01:06.520518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.739 [2024-11-19 08:01:06.520536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.739 [2024-11-19 08:01:06.520554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.739 [2024-11-19 08:01:06.534193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.739 [2024-11-19 08:01:06.534650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.739 [2024-11-19 08:01:06.534700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.739 [2024-11-19 08:01:06.534729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.739 [2024-11-19 08:01:06.535018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.739 [2024-11-19 08:01:06.535309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.740 [2024-11-19 08:01:06.535341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.740 [2024-11-19 08:01:06.535364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.740 [2024-11-19 08:01:06.535385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.740 [2024-11-19 08:01:06.548876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.740 [2024-11-19 08:01:06.549321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.740 [2024-11-19 08:01:06.549361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.740 [2024-11-19 08:01:06.549387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.740 [2024-11-19 08:01:06.549672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.740 [2024-11-19 08:01:06.549974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.740 [2024-11-19 08:01:06.550005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.740 [2024-11-19 08:01:06.550029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.740 [2024-11-19 08:01:06.550050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.740 [2024-11-19 08:01:06.563551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.740 [2024-11-19 08:01:06.564074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.740 [2024-11-19 08:01:06.564115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.740 [2024-11-19 08:01:06.564147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.740 [2024-11-19 08:01:06.564436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.740 [2024-11-19 08:01:06.564741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.740 [2024-11-19 08:01:06.564774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.740 [2024-11-19 08:01:06.564796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.740 [2024-11-19 08:01:06.564819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.740 [2024-11-19 08:01:06.578238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.740 [2024-11-19 08:01:06.578703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.740 [2024-11-19 08:01:06.578744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.740 [2024-11-19 08:01:06.578770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.740 [2024-11-19 08:01:06.579057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.740 [2024-11-19 08:01:06.579345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.740 [2024-11-19 08:01:06.579376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.740 [2024-11-19 08:01:06.579398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.740 [2024-11-19 08:01:06.579419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.740 [2024-11-19 08:01:06.592809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.740 [2024-11-19 08:01:06.593286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.740 [2024-11-19 08:01:06.593326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.740 [2024-11-19 08:01:06.593353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.740 [2024-11-19 08:01:06.593638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.740 [2024-11-19 08:01:06.593938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.740 [2024-11-19 08:01:06.593969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.740 [2024-11-19 08:01:06.593993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.740 [2024-11-19 08:01:06.594014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.740 [2024-11-19 08:01:06.607409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:14.740 [2024-11-19 08:01:06.607878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:14.740 [2024-11-19 08:01:06.607920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:14.740 [2024-11-19 08:01:06.607955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:14.740 [2024-11-19 08:01:06.608239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:14.740 [2024-11-19 08:01:06.608532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:14.740 [2024-11-19 08:01:06.608564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:14.740 [2024-11-19 08:01:06.608587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:14.740 [2024-11-19 08:01:06.608609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:14.740 [2024-11-19 08:01:06.621995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.740 [2024-11-19 08:01:06.622425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.740 [2024-11-19 08:01:06.622465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.740 [2024-11-19 08:01:06.622491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.740 [2024-11-19 08:01:06.622788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.740 [2024-11-19 08:01:06.623076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.740 [2024-11-19 08:01:06.623108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.740 [2024-11-19 08:01:06.623131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.740 [2024-11-19 08:01:06.623152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.740 [2024-11-19 08:01:06.636520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.740 [2024-11-19 08:01:06.636994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.740 [2024-11-19 08:01:06.637035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.740 [2024-11-19 08:01:06.637062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.740 [2024-11-19 08:01:06.637347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.740 [2024-11-19 08:01:06.637634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.740 [2024-11-19 08:01:06.637665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.740 [2024-11-19 08:01:06.637697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.740 [2024-11-19 08:01:06.637723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.740 [2024-11-19 08:01:06.651088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.740 [2024-11-19 08:01:06.651542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.740 [2024-11-19 08:01:06.651582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.740 [2024-11-19 08:01:06.651608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.740 [2024-11-19 08:01:06.651919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.740 [2024-11-19 08:01:06.652205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.740 [2024-11-19 08:01:06.652236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.741 [2024-11-19 08:01:06.652266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.741 [2024-11-19 08:01:06.652289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:14.741 [2024-11-19 08:01:06.665700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:14.741 [2024-11-19 08:01:06.666125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.741 [2024-11-19 08:01:06.666166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:14.741 [2024-11-19 08:01:06.666192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:14.741 [2024-11-19 08:01:06.666478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:14.741 [2024-11-19 08:01:06.666781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:14.741 [2024-11-19 08:01:06.666813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:14.741 [2024-11-19 08:01:06.666835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:14.741 [2024-11-19 08:01:06.666856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.031 [2024-11-19 08:01:06.680247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.031 [2024-11-19 08:01:06.680721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.031 [2024-11-19 08:01:06.680763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.031 [2024-11-19 08:01:06.680789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.031 [2024-11-19 08:01:06.681075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.031 [2024-11-19 08:01:06.681361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.031 [2024-11-19 08:01:06.681392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.031 [2024-11-19 08:01:06.681415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.031 [2024-11-19 08:01:06.681436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.031 [2024-11-19 08:01:06.694833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.031 [2024-11-19 08:01:06.695268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.031 [2024-11-19 08:01:06.695309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.031 [2024-11-19 08:01:06.695336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.031 [2024-11-19 08:01:06.695621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.031 [2024-11-19 08:01:06.695919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.031 [2024-11-19 08:01:06.695952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.031 [2024-11-19 08:01:06.695991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.031 [2024-11-19 08:01:06.696014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.031 2602.20 IOPS, 10.16 MiB/s [2024-11-19T07:01:06.961Z]
00:37:15.031 [2024-11-19 08:01:06.711340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.031 [2024-11-19 08:01:06.711789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.031 [2024-11-19 08:01:06.711838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.031 [2024-11-19 08:01:06.711866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.031 [2024-11-19 08:01:06.712152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.031 [2024-11-19 08:01:06.712440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.031 [2024-11-19 08:01:06.712471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.031 [2024-11-19 08:01:06.712494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.031 [2024-11-19 08:01:06.712516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.031 [2024-11-19 08:01:06.725990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.031 [2024-11-19 08:01:06.726454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.031 [2024-11-19 08:01:06.726496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.031 [2024-11-19 08:01:06.726522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.031 [2024-11-19 08:01:06.726821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.031 [2024-11-19 08:01:06.727111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.031 [2024-11-19 08:01:06.727142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.031 [2024-11-19 08:01:06.727164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.031 [2024-11-19 08:01:06.727185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.031 [2024-11-19 08:01:06.740444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.031 [2024-11-19 08:01:06.740923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.031 [2024-11-19 08:01:06.740966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.031 [2024-11-19 08:01:06.740993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.031 [2024-11-19 08:01:06.741279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.031 [2024-11-19 08:01:06.741568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.031 [2024-11-19 08:01:06.741599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.032 [2024-11-19 08:01:06.741622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.032 [2024-11-19 08:01:06.741643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.032 [2024-11-19 08:01:06.755073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.032 [2024-11-19 08:01:06.755548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.032 [2024-11-19 08:01:06.755590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.032 [2024-11-19 08:01:06.755616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.032 [2024-11-19 08:01:06.755910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.032 [2024-11-19 08:01:06.756199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.032 [2024-11-19 08:01:06.756230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.032 [2024-11-19 08:01:06.756254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.032 [2024-11-19 08:01:06.756276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.032 [2024-11-19 08:01:06.769648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.032 [2024-11-19 08:01:06.770114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.032 [2024-11-19 08:01:06.770155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.032 [2024-11-19 08:01:06.770181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.032 [2024-11-19 08:01:06.770467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.032 [2024-11-19 08:01:06.770766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.032 [2024-11-19 08:01:06.770798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.032 [2024-11-19 08:01:06.770820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.032 [2024-11-19 08:01:06.770843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3133932 Killed "${NVMF_APP[@]}" "$@"
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3135136
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3135136
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3135136 ']'
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:15.032 08:01:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:15.032 [2024-11-19 08:01:06.784277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.032 [2024-11-19 08:01:06.784740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.032 [2024-11-19 08:01:06.784782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.032 [2024-11-19 08:01:06.784809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.032 [2024-11-19 08:01:06.785097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.032 [2024-11-19 08:01:06.785386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.032 [2024-11-19 08:01:06.785418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.032 [2024-11-19 08:01:06.785440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.032 [2024-11-19 08:01:06.785462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.032 [2024-11-19 08:01:06.798892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.032 [2024-11-19 08:01:06.799361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.032 [2024-11-19 08:01:06.799402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.032 [2024-11-19 08:01:06.799428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.032 [2024-11-19 08:01:06.799724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.032 [2024-11-19 08:01:06.800012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.032 [2024-11-19 08:01:06.800043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.032 [2024-11-19 08:01:06.800065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.032 [2024-11-19 08:01:06.800086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.032 [2024-11-19 08:01:06.813501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.032 [2024-11-19 08:01:06.813987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.032 [2024-11-19 08:01:06.814029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.032 [2024-11-19 08:01:06.814056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.032 [2024-11-19 08:01:06.814346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.032 [2024-11-19 08:01:06.814638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.032 [2024-11-19 08:01:06.814669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.032 [2024-11-19 08:01:06.814701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.032 [2024-11-19 08:01:06.814725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.032 [2024-11-19 08:01:06.828114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.032 [2024-11-19 08:01:06.828664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.032 [2024-11-19 08:01:06.828727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.032 [2024-11-19 08:01:06.828757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.032 [2024-11-19 08:01:06.829052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.032 [2024-11-19 08:01:06.829383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.032 [2024-11-19 08:01:06.829415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.032 [2024-11-19 08:01:06.829441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.032 [2024-11-19 08:01:06.829465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.032 [2024-11-19 08:01:06.842880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.032 [2024-11-19 08:01:06.843344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.032 [2024-11-19 08:01:06.843387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.032 [2024-11-19 08:01:06.843415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.032 [2024-11-19 08:01:06.843719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.032 [2024-11-19 08:01:06.844015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.032 [2024-11-19 08:01:06.844047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.032 [2024-11-19 08:01:06.844070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.032 [2024-11-19 08:01:06.844093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.032 [2024-11-19 08:01:06.857561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.032 [2024-11-19 08:01:06.858017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.032 [2024-11-19 08:01:06.858059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.032 [2024-11-19 08:01:06.858086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.032 [2024-11-19 08:01:06.858379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.032 [2024-11-19 08:01:06.858672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.032 [2024-11-19 08:01:06.858714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.032 [2024-11-19 08:01:06.858738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.032 [2024-11-19 08:01:06.858762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.032 [2024-11-19 08:01:06.872107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.033 [2024-11-19 08:01:06.872573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.033 [2024-11-19 08:01:06.872615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.033 [2024-11-19 08:01:06.872642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.033 [2024-11-19 08:01:06.872950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.033 [2024-11-19 08:01:06.873241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.033 [2024-11-19 08:01:06.873273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.033 [2024-11-19 08:01:06.873296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.033 [2024-11-19 08:01:06.873319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.033 [2024-11-19 08:01:06.881769] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:37:15.033 [2024-11-19 08:01:06.881896] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:15.033 [2024-11-19 08:01:06.886678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.033 [2024-11-19 08:01:06.887152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.033 [2024-11-19 08:01:06.887193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.033 [2024-11-19 08:01:06.887219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.033 [2024-11-19 08:01:06.887510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.033 [2024-11-19 08:01:06.887814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.033 [2024-11-19 08:01:06.887846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.033 [2024-11-19 08:01:06.887869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.033 [2024-11-19 08:01:06.887891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.033 [2024-11-19 08:01:06.901270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.033 [2024-11-19 08:01:06.901703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.033 [2024-11-19 08:01:06.901745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.033 [2024-11-19 08:01:06.901773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.033 [2024-11-19 08:01:06.902063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.033 [2024-11-19 08:01:06.902358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.033 [2024-11-19 08:01:06.902389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.033 [2024-11-19 08:01:06.902411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.033 [2024-11-19 08:01:06.902434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.033 [2024-11-19 08:01:06.915882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.033 [2024-11-19 08:01:06.916384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.033 [2024-11-19 08:01:06.916426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.033 [2024-11-19 08:01:06.916462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.033 [2024-11-19 08:01:06.916766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.033 [2024-11-19 08:01:06.917060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.033 [2024-11-19 08:01:06.917092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.033 [2024-11-19 08:01:06.917115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.033 [2024-11-19 08:01:06.917137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.033 [2024-11-19 08:01:06.930571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.033 [2024-11-19 08:01:06.931031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.033 [2024-11-19 08:01:06.931074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.033 [2024-11-19 08:01:06.931101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.033 [2024-11-19 08:01:06.931411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.033 [2024-11-19 08:01:06.931723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.033 [2024-11-19 08:01:06.931755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.033 [2024-11-19 08:01:06.931778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.033 [2024-11-19 08:01:06.931801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.316 [2024-11-19 08:01:06.945208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.316 [2024-11-19 08:01:06.945697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.316 [2024-11-19 08:01:06.945738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.316 [2024-11-19 08:01:06.945765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.316 [2024-11-19 08:01:06.946053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.316 [2024-11-19 08:01:06.946344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.316 [2024-11-19 08:01:06.946376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.316 [2024-11-19 08:01:06.946399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.316 [2024-11-19 08:01:06.946421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.316 [2024-11-19 08:01:06.959859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.316 [2024-11-19 08:01:06.960325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.316 [2024-11-19 08:01:06.960366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.316 [2024-11-19 08:01:06.960392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.316 [2024-11-19 08:01:06.960700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.316 [2024-11-19 08:01:06.960998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.316 [2024-11-19 08:01:06.961030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.316 [2024-11-19 08:01:06.961054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.316 [2024-11-19 08:01:06.961076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.316 [2024-11-19 08:01:06.974497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.316 [2024-11-19 08:01:06.974961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.316 [2024-11-19 08:01:06.975002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.316 [2024-11-19 08:01:06.975029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.316 [2024-11-19 08:01:06.975317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.316 [2024-11-19 08:01:06.975609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.316 [2024-11-19 08:01:06.975640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.316 [2024-11-19 08:01:06.975663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.316 [2024-11-19 08:01:06.975685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.316 [2024-11-19 08:01:06.989230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.316 [2024-11-19 08:01:06.989704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.316 [2024-11-19 08:01:06.989746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.316 [2024-11-19 08:01:06.989774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.316 [2024-11-19 08:01:06.990064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.316 [2024-11-19 08:01:06.990356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.316 [2024-11-19 08:01:06.990388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.316 [2024-11-19 08:01:06.990410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.316 [2024-11-19 08:01:06.990432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.316 [2024-11-19 08:01:07.003782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.316 [2024-11-19 08:01:07.004220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.316 [2024-11-19 08:01:07.004260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.316 [2024-11-19 08:01:07.004287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.316 [2024-11-19 08:01:07.004573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.316 [2024-11-19 08:01:07.004874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.316 [2024-11-19 08:01:07.004906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.316 [2024-11-19 08:01:07.004935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.316 [2024-11-19 08:01:07.004958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.316 [2024-11-19 08:01:07.018477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.316 [2024-11-19 08:01:07.018946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.316 [2024-11-19 08:01:07.018988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.316 [2024-11-19 08:01:07.019015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.316 [2024-11-19 08:01:07.019303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.316 [2024-11-19 08:01:07.019592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.316 [2024-11-19 08:01:07.019624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.316 [2024-11-19 08:01:07.019647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.316 [2024-11-19 08:01:07.019669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.316 [2024-11-19 08:01:07.033005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.316 [2024-11-19 08:01:07.033483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.316 [2024-11-19 08:01:07.033525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.033552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.033853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.034143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.034174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.034198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.034220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.317 [2024-11-19 08:01:07.047651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.317 [2024-11-19 08:01:07.048132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.317 [2024-11-19 08:01:07.048174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.048201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.048491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.048793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.048826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.048848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.048871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.317 [2024-11-19 08:01:07.059506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:15.317 [2024-11-19 08:01:07.062249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.317 [2024-11-19 08:01:07.062717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.317 [2024-11-19 08:01:07.062759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.062785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.063092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.063383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.063415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.063437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.063458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.317 [2024-11-19 08:01:07.076917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.317 [2024-11-19 08:01:07.077464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.317 [2024-11-19 08:01:07.077509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.077537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.077849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.078162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.078194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.078218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.078242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.317 [2024-11-19 08:01:07.091589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.317 [2024-11-19 08:01:07.092134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.317 [2024-11-19 08:01:07.092180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.092210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.092505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.092812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.092846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.092871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.092894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.317 [2024-11-19 08:01:07.106438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.317 [2024-11-19 08:01:07.106932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.317 [2024-11-19 08:01:07.106973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.106999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.107290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.107586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.107618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.107641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.107663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.317 [2024-11-19 08:01:07.121062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.317 [2024-11-19 08:01:07.121532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.317 [2024-11-19 08:01:07.121589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.121616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.121922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.122217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.122248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.122270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.122293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.317 [2024-11-19 08:01:07.135583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.317 [2024-11-19 08:01:07.136024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.317 [2024-11-19 08:01:07.136065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.136092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.136381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.136670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.136712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.136736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.136758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.317 [2024-11-19 08:01:07.150221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.317 [2024-11-19 08:01:07.150670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.317 [2024-11-19 08:01:07.150720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.150754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.151044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.151335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.151366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.151389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.151411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.317 [2024-11-19 08:01:07.164992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.317 [2024-11-19 08:01:07.165463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.317 [2024-11-19 08:01:07.165504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.317 [2024-11-19 08:01:07.165531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.317 [2024-11-19 08:01:07.165839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.317 [2024-11-19 08:01:07.166134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.317 [2024-11-19 08:01:07.166166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.317 [2024-11-19 08:01:07.166190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.317 [2024-11-19 08:01:07.166213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.318 [2024-11-19 08:01:07.179736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.318 [2024-11-19 08:01:07.180279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.318 [2024-11-19 08:01:07.180321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.318 [2024-11-19 08:01:07.180348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.318 [2024-11-19 08:01:07.180641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.318 [2024-11-19 08:01:07.180945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.318 [2024-11-19 08:01:07.180977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.318 [2024-11-19 08:01:07.181000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.318 [2024-11-19 08:01:07.181023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.318 [2024-11-19 08:01:07.194423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.318 [2024-11-19 08:01:07.194896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.318 [2024-11-19 08:01:07.194936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.318 [2024-11-19 08:01:07.194963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.318 [2024-11-19 08:01:07.195251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.318 [2024-11-19 08:01:07.195547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.318 [2024-11-19 08:01:07.195579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.318 [2024-11-19 08:01:07.195602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.318 [2024-11-19 08:01:07.195624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:15.318 [2024-11-19 08:01:07.204602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:15.318 [2024-11-19 08:01:07.204652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:15.318 [2024-11-19 08:01:07.204676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:15.318 [2024-11-19 08:01:07.204712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:15.318 [2024-11-19 08:01:07.204733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:15.318 [2024-11-19 08:01:07.207430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:15.318 [2024-11-19 08:01:07.207482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:15.318 [2024-11-19 08:01:07.207489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:15.318 [2024-11-19 08:01:07.209058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.318 [2024-11-19 08:01:07.209527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.318 [2024-11-19 08:01:07.209569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.318 [2024-11-19 08:01:07.209595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.318 [2024-11-19 08:01:07.209895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.318 [2024-11-19 08:01:07.210190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.318 [2024-11-19 08:01:07.210221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.318 [2024-11-19 08:01:07.210244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.318 [2024-11-19 08:01:07.210266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.318 [2024-11-19 08:01:07.223804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.318 [2024-11-19 08:01:07.224491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.318 [2024-11-19 08:01:07.224545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.318 [2024-11-19 08:01:07.224577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.318 [2024-11-19 08:01:07.224893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.318 [2024-11-19 08:01:07.225198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.318 [2024-11-19 08:01:07.225232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.318 [2024-11-19 08:01:07.225258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.318 [2024-11-19 08:01:07.225286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.318 [2024-11-19 08:01:07.238716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.318 [2024-11-19 08:01:07.239184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.318 [2024-11-19 08:01:07.239226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.318 [2024-11-19 08:01:07.239252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.318 [2024-11-19 08:01:07.239544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.318 [2024-11-19 08:01:07.239850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.318 [2024-11-19 08:01:07.239883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.318 [2024-11-19 08:01:07.239907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.318 [2024-11-19 08:01:07.239929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.579 [2024-11-19 08:01:07.253494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.579 [2024-11-19 08:01:07.253958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.579 [2024-11-19 08:01:07.254000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.579 [2024-11-19 08:01:07.254026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.579 [2024-11-19 08:01:07.254317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.579 [2024-11-19 08:01:07.254607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.579 [2024-11-19 08:01:07.254638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.579 [2024-11-19 08:01:07.254661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.579 [2024-11-19 08:01:07.254682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.579 [2024-11-19 08:01:07.268054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.579 [2024-11-19 08:01:07.268535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.579 [2024-11-19 08:01:07.268577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.579 [2024-11-19 08:01:07.268604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.579 [2024-11-19 08:01:07.268913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.579 [2024-11-19 08:01:07.269201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.579 [2024-11-19 08:01:07.269233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.579 [2024-11-19 08:01:07.269260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.579 [2024-11-19 08:01:07.269282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.579 [2024-11-19 08:01:07.282634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.579 [2024-11-19 08:01:07.283079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.579 [2024-11-19 08:01:07.283120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.579 [2024-11-19 08:01:07.283153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.579 [2024-11-19 08:01:07.283449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.579 [2024-11-19 08:01:07.283763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.579 [2024-11-19 08:01:07.283795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.579 [2024-11-19 08:01:07.283818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.579 [2024-11-19 08:01:07.283839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.579 [2024-11-19 08:01:07.297390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.579 [2024-11-19 08:01:07.298077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.579 [2024-11-19 08:01:07.298130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.579 [2024-11-19 08:01:07.298161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.579 [2024-11-19 08:01:07.298466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.579 [2024-11-19 08:01:07.298785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.579 [2024-11-19 08:01:07.298820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.579 [2024-11-19 08:01:07.298846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.579 [2024-11-19 08:01:07.298874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.579 [2024-11-19 08:01:07.312297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.579 [2024-11-19 08:01:07.312941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.579 [2024-11-19 08:01:07.312994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.579 [2024-11-19 08:01:07.313027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.579 [2024-11-19 08:01:07.313334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.579 [2024-11-19 08:01:07.313637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.579 [2024-11-19 08:01:07.313670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.579 [2024-11-19 08:01:07.313706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.579 [2024-11-19 08:01:07.313736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.579 [2024-11-19 08:01:07.327046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.327582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.327630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.327661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.327988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.328291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.328324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.580 [2024-11-19 08:01:07.328348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.580 [2024-11-19 08:01:07.328372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.580 [2024-11-19 08:01:07.341612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.342105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.342146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.342173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.342463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.342769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.342803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.580 [2024-11-19 08:01:07.342826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.580 [2024-11-19 08:01:07.342848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.580 [2024-11-19 08:01:07.356354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.356815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.356858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.356885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.357179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.357475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.357507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.580 [2024-11-19 08:01:07.357531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.580 [2024-11-19 08:01:07.357553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.580 [2024-11-19 08:01:07.370935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.371380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.371422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.371449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.371762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.372065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.372103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.580 [2024-11-19 08:01:07.372128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.580 [2024-11-19 08:01:07.372150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.580 [2024-11-19 08:01:07.385583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.386009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.386050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.386077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.386363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.386652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.386683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.580 [2024-11-19 08:01:07.386720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.580 [2024-11-19 08:01:07.386743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.580 [2024-11-19 08:01:07.400170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.400605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.400645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.400671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.400969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.401259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.401290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.580 [2024-11-19 08:01:07.401312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.580 [2024-11-19 08:01:07.401334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.580 [2024-11-19 08:01:07.414724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.415176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.415217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.415244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.415529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.415832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.415864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.580 [2024-11-19 08:01:07.415887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.580 [2024-11-19 08:01:07.415914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.580 [2024-11-19 08:01:07.429363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.429809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.429850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.429876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.430162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.430452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.430484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.580 [2024-11-19 08:01:07.430507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.580 [2024-11-19 08:01:07.430529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.580 [2024-11-19 08:01:07.444030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.444467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.444507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.444533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.444829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.445118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.445149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.580 [2024-11-19 08:01:07.445172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.580 [2024-11-19 08:01:07.445194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.580 [2024-11-19 08:01:07.458599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.580 [2024-11-19 08:01:07.459294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.580 [2024-11-19 08:01:07.459348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.580 [2024-11-19 08:01:07.459380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.580 [2024-11-19 08:01:07.459680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.580 [2024-11-19 08:01:07.459992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.580 [2024-11-19 08:01:07.460025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.581 [2024-11-19 08:01:07.460053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.581 [2024-11-19 08:01:07.460081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.581 [2024-11-19 08:01:07.473519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.581 [2024-11-19 08:01:07.474196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.581 [2024-11-19 08:01:07.474248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.581 [2024-11-19 08:01:07.474280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.581 [2024-11-19 08:01:07.474584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.581 [2024-11-19 08:01:07.474894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.581 [2024-11-19 08:01:07.474928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.581 [2024-11-19 08:01:07.474953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.581 [2024-11-19 08:01:07.474982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.581 [2024-11-19 08:01:07.488422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.581 [2024-11-19 08:01:07.488853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.581 [2024-11-19 08:01:07.488894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.581 [2024-11-19 08:01:07.488921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.581 [2024-11-19 08:01:07.489216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.581 [2024-11-19 08:01:07.489513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.581 [2024-11-19 08:01:07.489544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.581 [2024-11-19 08:01:07.489568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.581 [2024-11-19 08:01:07.489590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.581 [2024-11-19 08:01:07.503192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.581 [2024-11-19 08:01:07.503611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.581 [2024-11-19 08:01:07.503651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.581 [2024-11-19 08:01:07.503677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.581 [2024-11-19 08:01:07.503982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.581 [2024-11-19 08:01:07.504279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.581 [2024-11-19 08:01:07.504313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.581 [2024-11-19 08:01:07.504336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.581 [2024-11-19 08:01:07.504358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.841 [2024-11-19 08:01:07.517830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.841 [2024-11-19 08:01:07.518267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.841 [2024-11-19 08:01:07.518307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.841 [2024-11-19 08:01:07.518340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.841 [2024-11-19 08:01:07.518630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.841 [2024-11-19 08:01:07.518934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.841 [2024-11-19 08:01:07.518967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.841 [2024-11-19 08:01:07.518990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.841 [2024-11-19 08:01:07.519013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.841 [2024-11-19 08:01:07.532537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.841 [2024-11-19 08:01:07.533009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.841 [2024-11-19 08:01:07.533051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.841 [2024-11-19 08:01:07.533079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.841 [2024-11-19 08:01:07.533367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.841 [2024-11-19 08:01:07.533657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.841 [2024-11-19 08:01:07.533713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.841 [2024-11-19 08:01:07.533737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.841 [2024-11-19 08:01:07.533759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.841 [2024-11-19 08:01:07.546978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.841 [2024-11-19 08:01:07.547366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.841 [2024-11-19 08:01:07.547402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.841 [2024-11-19 08:01:07.547426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.841 [2024-11-19 08:01:07.547723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.841 [2024-11-19 08:01:07.547983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.841 [2024-11-19 08:01:07.548026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.841 [2024-11-19 08:01:07.548046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.841 [2024-11-19 08:01:07.548065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.841 [2024-11-19 08:01:07.561174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.841 [2024-11-19 08:01:07.561586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.841 [2024-11-19 08:01:07.561623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.841 [2024-11-19 08:01:07.561647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.841 [2024-11-19 08:01:07.561918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.841 [2024-11-19 08:01:07.562201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.841 [2024-11-19 08:01:07.562230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.841 [2024-11-19 08:01:07.562250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.841 [2024-11-19 08:01:07.562269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.841 [2024-11-19 08:01:07.575362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:15.841 [2024-11-19 08:01:07.575856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.841 [2024-11-19 08:01:07.575895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:37:15.841 [2024-11-19 08:01:07.575921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set
00:37:15.841 [2024-11-19 08:01:07.576197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:37:15.841 [2024-11-19 08:01:07.576455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:15.841 [2024-11-19 08:01:07.576482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:15.841 [2024-11-19 08:01:07.576504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:15.841 [2024-11-19 08:01:07.576526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:15.841 [2024-11-19 08:01:07.589509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.841 [2024-11-19 08:01:07.589945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.841 [2024-11-19 08:01:07.589982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.590005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.590280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.590532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.590560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.590580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.590599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.842 [2024-11-19 08:01:07.603582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.842 [2024-11-19 08:01:07.603988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.842 [2024-11-19 08:01:07.604025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.604048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.604309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.604586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.604614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.604639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.604659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.842 [2024-11-19 08:01:07.617614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.842 [2024-11-19 08:01:07.618052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.842 [2024-11-19 08:01:07.618089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.618112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.618386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.618640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.618668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.618687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.618735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.842 [2024-11-19 08:01:07.631582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.842 [2024-11-19 08:01:07.631984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.842 [2024-11-19 08:01:07.632022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.632045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.632317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.632605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.632633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.632653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.632672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.842 [2024-11-19 08:01:07.645660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.842 [2024-11-19 08:01:07.646119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.842 [2024-11-19 08:01:07.646156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.646179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.646449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.646709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.646738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.646758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.646777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.842 [2024-11-19 08:01:07.659734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.842 [2024-11-19 08:01:07.660137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.842 [2024-11-19 08:01:07.660173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.660198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.660469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.660729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.660756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.660776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.660795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.842 [2024-11-19 08:01:07.673728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.842 [2024-11-19 08:01:07.674144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.842 [2024-11-19 08:01:07.674181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.674205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.674476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.674736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.674763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.674783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.674802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.842 [2024-11-19 08:01:07.687668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.842 [2024-11-19 08:01:07.688078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.842 [2024-11-19 08:01:07.688115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.688139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.688394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.688652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.688681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.688711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.688732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.842 [2024-11-19 08:01:07.701707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.842 [2024-11-19 08:01:07.702115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.842 [2024-11-19 08:01:07.702156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.702181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.702451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.702729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.702759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.702780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.702800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.842 2168.50 IOPS, 8.47 MiB/s [2024-11-19T07:01:07.772Z] [2024-11-19 08:01:07.716355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.842 [2024-11-19 08:01:07.716783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.842 [2024-11-19 08:01:07.716821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.842 [2024-11-19 08:01:07.716844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.842 [2024-11-19 08:01:07.717117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.842 [2024-11-19 08:01:07.717368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.842 [2024-11-19 08:01:07.717396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.842 [2024-11-19 08:01:07.717416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.842 [2024-11-19 08:01:07.717435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.843 [2024-11-19 08:01:07.730391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.843 [2024-11-19 08:01:07.730806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.843 [2024-11-19 08:01:07.730844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.843 [2024-11-19 08:01:07.730868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.843 [2024-11-19 08:01:07.731136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.843 [2024-11-19 08:01:07.731386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.843 [2024-11-19 08:01:07.731414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.843 [2024-11-19 08:01:07.731433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.843 [2024-11-19 08:01:07.731466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.843 [2024-11-19 08:01:07.744471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.843 [2024-11-19 08:01:07.744880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.843 [2024-11-19 08:01:07.744917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.843 [2024-11-19 08:01:07.744941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.843 [2024-11-19 08:01:07.745218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.843 [2024-11-19 08:01:07.745468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.843 [2024-11-19 08:01:07.745496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.843 [2024-11-19 08:01:07.745516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.843 [2024-11-19 08:01:07.745534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.843 [2024-11-19 08:01:07.758559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.843 [2024-11-19 08:01:07.758966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.843 [2024-11-19 08:01:07.759010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.843 [2024-11-19 08:01:07.759044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:15.843 [2024-11-19 08:01:07.759314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:15.843 [2024-11-19 08:01:07.759565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:15.843 [2024-11-19 08:01:07.759593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:15.843 [2024-11-19 08:01:07.759613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:15.843 [2024-11-19 08:01:07.759632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:15.843 [2024-11-19 08:01:07.772791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:15.843 [2024-11-19 08:01:07.773185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.843 [2024-11-19 08:01:07.773222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:15.843 [2024-11-19 08:01:07.773245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.102 [2024-11-19 08:01:07.773526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.102 [2024-11-19 08:01:07.773801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.102 [2024-11-19 08:01:07.773830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.102 [2024-11-19 08:01:07.773849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.102 [2024-11-19 08:01:07.773868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.102 [2024-11-19 08:01:07.786965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.102 [2024-11-19 08:01:07.787389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.102 [2024-11-19 08:01:07.787425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.102 [2024-11-19 08:01:07.787449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.102 [2024-11-19 08:01:07.787743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.102 [2024-11-19 08:01:07.788010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.102 [2024-11-19 08:01:07.788054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.102 [2024-11-19 08:01:07.788075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.102 [2024-11-19 08:01:07.788094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.102 [2024-11-19 08:01:07.800879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.102 [2024-11-19 08:01:07.801320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.102 [2024-11-19 08:01:07.801358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.102 [2024-11-19 08:01:07.801383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.102 [2024-11-19 08:01:07.801655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.102 [2024-11-19 08:01:07.801938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.102 [2024-11-19 08:01:07.801983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.102 [2024-11-19 08:01:07.802004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.102 [2024-11-19 08:01:07.802024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.102 [2024-11-19 08:01:07.814974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.102 [2024-11-19 08:01:07.815364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.102 [2024-11-19 08:01:07.815401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.102 [2024-11-19 08:01:07.815425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.102 [2024-11-19 08:01:07.815719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.102 [2024-11-19 08:01:07.815986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.102 [2024-11-19 08:01:07.816029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.102 [2024-11-19 08:01:07.816049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.102 [2024-11-19 08:01:07.816068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.102 [2024-11-19 08:01:07.829095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.102 [2024-11-19 08:01:07.829536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.102 [2024-11-19 08:01:07.829574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.102 [2024-11-19 08:01:07.829598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.102 [2024-11-19 08:01:07.829865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.102 [2024-11-19 08:01:07.830125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.102 [2024-11-19 08:01:07.830154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.102 [2024-11-19 08:01:07.830180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.102 [2024-11-19 08:01:07.830202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.102 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:16.102 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:16.102 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:16.102 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:16.102 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.102 [2024-11-19 08:01:07.843253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.102 [2024-11-19 08:01:07.843718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.102 [2024-11-19 08:01:07.843757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.103 [2024-11-19 08:01:07.843781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.103 [2024-11-19 08:01:07.844060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.103 [2024-11-19 08:01:07.844314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.103 [2024-11-19 08:01:07.844341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.103 [2024-11-19 08:01:07.844362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.103 [2024-11-19 08:01:07.844381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.103 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:16.103 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:16.103 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.103 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.103 [2024-11-19 08:01:07.857495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.103 [2024-11-19 08:01:07.857911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.103 [2024-11-19 08:01:07.857949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.103 [2024-11-19 08:01:07.857972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.103 [2024-11-19 08:01:07.858142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.103 [2024-11-19 08:01:07.858247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.103 [2024-11-19 08:01:07.858508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.103 [2024-11-19 08:01:07.858535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.103 [2024-11-19 08:01:07.858556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.103 [2024-11-19 08:01:07.858575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.103 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.103 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:16.103 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.103 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.103 [2024-11-19 08:01:07.871780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.103 [2024-11-19 08:01:07.872242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.103 [2024-11-19 08:01:07.872281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.103 [2024-11-19 08:01:07.872307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.103 [2024-11-19 08:01:07.872580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.103 [2024-11-19 08:01:07.872875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.103 [2024-11-19 08:01:07.872904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.103 [2024-11-19 08:01:07.872926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.103 [2024-11-19 08:01:07.872947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.103 [2024-11-19 08:01:07.885831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.103 [2024-11-19 08:01:07.886224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.103 [2024-11-19 08:01:07.886259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.103 [2024-11-19 08:01:07.886282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.103 [2024-11-19 08:01:07.886550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.103 [2024-11-19 08:01:07.886829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.103 [2024-11-19 08:01:07.886857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.103 [2024-11-19 08:01:07.886877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.103 [2024-11-19 08:01:07.886896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.103 [2024-11-19 08:01:07.900087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.103 [2024-11-19 08:01:07.900683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.103 [2024-11-19 08:01:07.900741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.103 [2024-11-19 08:01:07.900769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.103 [2024-11-19 08:01:07.901056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.103 [2024-11-19 08:01:07.901317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.103 [2024-11-19 08:01:07.901347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.103 [2024-11-19 08:01:07.901371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.103 [2024-11-19 08:01:07.901395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.103 [2024-11-19 08:01:07.914395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.103 [2024-11-19 08:01:07.914849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.103 [2024-11-19 08:01:07.914895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.103 [2024-11-19 08:01:07.914919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.103 [2024-11-19 08:01:07.915196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.103 [2024-11-19 08:01:07.915461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.103 [2024-11-19 08:01:07.915489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.103 [2024-11-19 08:01:07.915509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.103 [2024-11-19 08:01:07.915528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.103 [2024-11-19 08:01:07.928510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.103 [2024-11-19 08:01:07.928935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.103 [2024-11-19 08:01:07.928973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.103 [2024-11-19 08:01:07.928997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.103 [2024-11-19 08:01:07.929272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.103 [2024-11-19 08:01:07.929527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.103 [2024-11-19 08:01:07.929555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.103 [2024-11-19 08:01:07.929574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.103 [2024-11-19 08:01:07.929594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.103 [2024-11-19 08:01:07.942479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.103 [2024-11-19 08:01:07.942966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.103 [2024-11-19 08:01:07.943005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.104 [2024-11-19 08:01:07.943029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.104 [2024-11-19 08:01:07.943288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.104 [2024-11-19 08:01:07.943549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.104 [2024-11-19 08:01:07.943577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.104 [2024-11-19 08:01:07.943598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.104 [2024-11-19 08:01:07.943618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.104 [2024-11-19 08:01:07.956540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.104 [2024-11-19 08:01:07.956960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.104 [2024-11-19 08:01:07.956998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.104 [2024-11-19 08:01:07.957028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.104 [2024-11-19 08:01:07.957303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.104 [2024-11-19 08:01:07.957558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.104 [2024-11-19 08:01:07.957602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.104 [2024-11-19 08:01:07.957623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.104 [2024-11-19 08:01:07.957643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:16.104 Malloc0 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.104 [2024-11-19 08:01:07.970751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.104 [2024-11-19 08:01:07.971176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.104 [2024-11-19 08:01:07.971214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:37:16.104 [2024-11-19 08:01:07.971237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:37:16.104 [2024-11-19 08:01:07.971510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:37:16.104 [2024-11-19 08:01:07.971791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:16.104 [2024-11-19 08:01:07.971821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:16.104 [2024-11-19 08:01:07.971842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:16.104 [2024-11-19 08:01:07.971861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:16.104 [2024-11-19 08:01:07.977713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.104 08:01:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3134469 00:37:16.104 [2024-11-19 08:01:07.984966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:16.361 [2024-11-19 08:01:08.099867] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:37:17.862 2402.57 IOPS, 9.39 MiB/s [2024-11-19T07:01:10.724Z] 2889.88 IOPS, 11.29 MiB/s [2024-11-19T07:01:12.101Z] 3249.78 IOPS, 12.69 MiB/s [2024-11-19T07:01:13.034Z] 3552.80 IOPS, 13.88 MiB/s [2024-11-19T07:01:13.973Z] 3802.91 IOPS, 14.86 MiB/s [2024-11-19T07:01:14.906Z] 4012.75 IOPS, 15.67 MiB/s [2024-11-19T07:01:15.840Z] 4194.46 IOPS, 16.38 MiB/s [2024-11-19T07:01:16.775Z] 4352.71 IOPS, 17.00 MiB/s [2024-11-19T07:01:16.775Z] 4483.47 IOPS, 17.51 MiB/s 00:37:24.845 Latency(us) 00:37:24.845 [2024-11-19T07:01:16.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.845 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:24.845 Verification LBA range: start 0x0 length 0x4000 00:37:24.845 Nvme1n1 : 15.01 4487.82 17.53 9331.72 0.00 9233.81 1159.02 44661.57 00:37:24.845 [2024-11-19T07:01:16.775Z] =================================================================================================================== 00:37:24.845 [2024-11-19T07:01:16.775Z] Total : 4487.82 17.53 9331.72 0.00 9233.81 1159.02 44661.57 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:25.777 rmmod nvme_tcp 00:37:25.777 rmmod nvme_fabrics 00:37:25.777 rmmod nvme_keyring 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3135136 ']' 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3135136 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3135136 ']' 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3135136 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135136 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135136' 00:37:25.777 killing process with pid 3135136 00:37:25.777 
08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3135136 00:37:25.777 08:01:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3135136 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:27.155 08:01:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.054 08:01:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.054 00:37:29.054 real 0m26.326s 00:37:29.055 user 1m11.189s 00:37:29.055 sys 0m5.058s 00:37:29.055 08:01:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:29.055 08:01:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:29.055 ************************************ 00:37:29.055 END TEST nvmf_bdevperf 00:37:29.055 
************************************ 00:37:29.055 08:01:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:29.055 08:01:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:29.055 08:01:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:29.055 08:01:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.055 ************************************ 00:37:29.055 START TEST nvmf_target_disconnect 00:37:29.055 ************************************ 00:37:29.055 08:01:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:29.314 * Looking for test storage... 00:37:29.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.314 --rc genhtml_branch_coverage=1 00:37:29.314 --rc genhtml_function_coverage=1 00:37:29.314 --rc genhtml_legend=1 00:37:29.314 --rc geninfo_all_blocks=1 00:37:29.314 --rc geninfo_unexecuted_blocks=1 
00:37:29.314 00:37:29.314 ' 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.314 --rc genhtml_branch_coverage=1 00:37:29.314 --rc genhtml_function_coverage=1 00:37:29.314 --rc genhtml_legend=1 00:37:29.314 --rc geninfo_all_blocks=1 00:37:29.314 --rc geninfo_unexecuted_blocks=1 00:37:29.314 00:37:29.314 ' 00:37:29.314 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.314 --rc genhtml_branch_coverage=1 00:37:29.314 --rc genhtml_function_coverage=1 00:37:29.315 --rc genhtml_legend=1 00:37:29.315 --rc geninfo_all_blocks=1 00:37:29.315 --rc geninfo_unexecuted_blocks=1 00:37:29.315 00:37:29.315 ' 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:29.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.315 --rc genhtml_branch_coverage=1 00:37:29.315 --rc genhtml_function_coverage=1 00:37:29.315 --rc genhtml_legend=1 00:37:29.315 --rc geninfo_all_blocks=1 00:37:29.315 --rc geninfo_unexecuted_blocks=1 00:37:29.315 00:37:29.315 ' 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.315 08:01:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:29.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.315 08:01:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:31.214 
08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:31.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:31.214 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.214 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:31.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:31.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:31.215 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:31.474 08:01:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:31.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:31.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:37:31.474 00:37:31.474 --- 10.0.0.2 ping statistics --- 00:37:31.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.474 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:31.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:31.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:37:31.474 00:37:31.474 --- 10.0.0.1 ping statistics --- 00:37:31.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.474 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:31.474 08:01:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:31.474 ************************************ 00:37:31.474 START TEST nvmf_target_disconnect_tc1 00:37:31.474 ************************************ 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:31.474 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:31.732 [2024-11-19 08:01:23.533186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:31.732 [2024-11-19 08:01:23.533299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 
with addr=10.0.0.2, port=4420 00:37:31.732 [2024-11-19 08:01:23.533393] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:31.732 [2024-11-19 08:01:23.533425] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:31.732 [2024-11-19 08:01:23.533450] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:31.732 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:31.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:31.732 Initializing NVMe Controllers 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:31.732 00:37:31.732 real 0m0.233s 00:37:31.732 user 0m0.099s 00:37:31.732 sys 0m0.134s 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:31.732 ************************************ 00:37:31.732 END TEST nvmf_target_disconnect_tc1 00:37:31.732 ************************************ 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:31.732 08:01:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:31.732 ************************************ 00:37:31.732 START TEST nvmf_target_disconnect_tc2 00:37:31.732 ************************************ 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3138556 00:37:31.732 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:31.733 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3138556 00:37:31.733 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3138556 ']' 00:37:31.733 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:31.733 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.733 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:31.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:31.733 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.733 08:01:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:31.990 [2024-11-19 08:01:23.730862] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:37:31.990 [2024-11-19 08:01:23.731013] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:31.990 [2024-11-19 08:01:23.875618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:32.246 [2024-11-19 08:01:23.996816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:32.246 [2024-11-19 08:01:23.996901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:32.246 [2024-11-19 08:01:23.996923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:32.246 [2024-11-19 08:01:23.996943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:32.246 [2024-11-19 08:01:23.996958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:32.246 [2024-11-19 08:01:23.999290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:32.247 [2024-11-19 08:01:23.999401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:32.247 [2024-11-19 08:01:23.999454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:32.247 [2024-11-19 08:01:23.999552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:32.810 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:32.810 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:32.810 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:32.810 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:32.810 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:32.810 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:32.810 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:32.810 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.810 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.068 Malloc0 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.068 08:01:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.068 [2024-11-19 08:01:24.804812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.068 08:01:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.068 [2024-11-19 08:01:24.834827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3138709 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:33.068 08:01:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:34.970 08:01:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3138556 00:37:34.970 08:01:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Write completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Write completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Write completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Write completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Write completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Write completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Write completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 Read completed with error (sct=0, sc=8) 00:37:34.970 starting I/O failed 00:37:34.970 
00:37:34.970 Read completed with error (sct=0, sc=8)
00:37:34.970 starting I/O failed
[... identical "Read/Write completed with error (sct=0, sc=8) / starting I/O failed" pairs repeat for every outstanding I/O on each queue pair; repeats elided ...]
00:37:34.970 [2024-11-19 08:01:26.872648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:34.971 [2024-11-19 08:01:26.873292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:34.971 [2024-11-19 08:01:26.873863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:37:34.971 [2024-11-19 08:01:26.874450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:34.971 [2024-11-19 08:01:26.874696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:34.971 [2024-11-19 08:01:26.874749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:34.971 qpair failed and we were unable to recover it.
[... the connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it." triplet repeats through 08:01:26.890 for tqpair=0x61500021ff00, 0x615000210000, 0x6150001f2f00, and 0x6150001ffe80, all with addr=10.0.0.2, port=4420 and errno = 111; repeats elided ...]
00:37:34.973 [2024-11-19 08:01:26.890902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-11-19 08:01:26.890937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.973 [2024-11-19 08:01:26.891068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.973 [2024-11-19 08:01:26.891102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.973 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.891232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.891266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.891383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.891417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.891529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.891563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-11-19 08:01:26.891702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.891737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.891878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.891913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.892094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.892128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.892234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.892268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.892397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.892431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-11-19 08:01:26.892531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.892565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.892674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.892715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.892849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.892883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.893013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.893047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.893182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.893216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-11-19 08:01:26.893335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.893370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.893554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.893603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.893752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.893801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.893968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.894005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.894114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.894154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-11-19 08:01:26.894293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.894328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.894520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.894569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.894685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.894729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.894849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.894888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.895016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.895070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-11-19 08:01:26.895295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.895330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.895473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.895508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.895685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.895729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.895883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.895923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.896026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.896061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-11-19 08:01:26.896196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.896235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.896494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.896529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.896667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.896710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.896888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.896923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.897034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.897069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 
00:37:34.974 [2024-11-19 08:01:26.897210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.897245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.897406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.897441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.897596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.897647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.897827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.974 [2024-11-19 08:01:26.897883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.974 qpair failed and we were unable to recover it. 00:37:34.974 [2024-11-19 08:01:26.898010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.898048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 
00:37:34.975 [2024-11-19 08:01:26.898304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.898361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.898475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.898516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.898685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.898729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.898841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.898877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.899165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.899229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 
00:37:34.975 [2024-11-19 08:01:26.899452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.899520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.899681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.899724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.899869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.899903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.900061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.900110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.900410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.900470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 
00:37:34.975 [2024-11-19 08:01:26.900675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.900716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.900882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.900916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.901080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.901114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:34.975 [2024-11-19 08:01:26.901233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.975 [2024-11-19 08:01:26.901267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:34.975 qpair failed and we were unable to recover it. 00:37:35.254 [2024-11-19 08:01:26.901425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.254 [2024-11-19 08:01:26.901512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.254 qpair failed and we were unable to recover it. 
00:37:35.254 [2024-11-19 08:01:26.901668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.254 [2024-11-19 08:01:26.901730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.254 qpair failed and we were unable to recover it. 00:37:35.254 [2024-11-19 08:01:26.901839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.254 [2024-11-19 08:01:26.901873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.254 qpair failed and we were unable to recover it. 00:37:35.254 [2024-11-19 08:01:26.902014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.254 [2024-11-19 08:01:26.902048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.254 qpair failed and we were unable to recover it. 00:37:35.254 [2024-11-19 08:01:26.902215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.254 [2024-11-19 08:01:26.902249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.254 qpair failed and we were unable to recover it. 00:37:35.254 [2024-11-19 08:01:26.902408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.254 [2024-11-19 08:01:26.902466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.254 qpair failed and we were unable to recover it. 
00:37:35.255 [2024-11-19 08:01:26.902628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.902662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.902810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.902859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.903003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.903071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.903201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.903244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.903404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.903444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 
00:37:35.255 [2024-11-19 08:01:26.903595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.903647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.903810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.903858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.904018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.904056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.904172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.904210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.904353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.904405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 
00:37:35.255 [2024-11-19 08:01:26.904513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.904546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.904673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.904728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.904850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.904887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.905161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.905196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 00:37:35.255 [2024-11-19 08:01:26.905307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.255 [2024-11-19 08:01:26.905342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.255 qpair failed and we were unable to recover it. 
00:37:35.255 [2024-11-19 08:01:26.905530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.905591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.905737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.905775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.905939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.905974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.906168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.906205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.906425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.906457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.906586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.906619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.906772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.906806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.906945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.906978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.907121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.907169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.907326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.907425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.907603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.907652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.907781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.907816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.907921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.907954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.908114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.908148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.908281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.908319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.908486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.908519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.908654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.908687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.908801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.908835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.909076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.909124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.909358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.909406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.909554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.909591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.255 [2024-11-19 08:01:26.909735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.255 [2024-11-19 08:01:26.909771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.255 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.909924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.909959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.910103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.910137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.910293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.910333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.910494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.910536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.910708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.910773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.910965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.911032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.911166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.911206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.911426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.911460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.911599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.911632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.911754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.911788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.911925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.911975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.912163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.912197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.912336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.912369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.912520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.912569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.912722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.912771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.912968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.913006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.913154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.913191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.913339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.913374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.913559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.913607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.913768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.913817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.913933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.913986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.914134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.914173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.914355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.914420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.914592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.914638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.914755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.914789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.914936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.915003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.915164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.915227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.915439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.915499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.915642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.915676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.915868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.915926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.916081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.916116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.916255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.916290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.916428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.916461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.916593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.916626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.916754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.916787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.916915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.916955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.917085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.256 [2024-11-19 08:01:26.917120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.256 qpair failed and we were unable to recover it.
00:37:35.256 [2024-11-19 08:01:26.917312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.917367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.917473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.917509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.917668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.917712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.917850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.917889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.918113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.918151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.918359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.918428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.918558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.918593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.918764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.918800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.918902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.918936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.919145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.919178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.919312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.919345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.919523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.919557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.919702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.919739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.919847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.919901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.920079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.920134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.920301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.920339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.920512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.920550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.920731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.920765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.920879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.920912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.921049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.921083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.921219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.921252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.921410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.921448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.921615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.921651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.921778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.921826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.921981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.922018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.922253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.922291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.922432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.922470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.922609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.922646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.922828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.922865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.922999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.923048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.923298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.923355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.923504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.923539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.923681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.923726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.923908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.923957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.924078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.924117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.924282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.924335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.924456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.257 [2024-11-19 08:01:26.924495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.257 qpair failed and we were unable to recover it.
00:37:35.257 [2024-11-19 08:01:26.924659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.924714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.924860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.924897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.925035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.925070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.925251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.925290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.925451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.925485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.925617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.925651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.925787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.925837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.926010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.926047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.926197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.926238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.926413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.926452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.926628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.926677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.926861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.926909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.927084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.927124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.927321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.927377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.927502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.927556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.927719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.927768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.927874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.927908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.928047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.928079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.928218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.258 [2024-11-19 08:01:26.928255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.258 qpair failed and we were unable to recover it.
00:37:35.258 [2024-11-19 08:01:26.928446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.928480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.928611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.928644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.928760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.928793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.928928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.928962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.929112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.929161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 
00:37:35.258 [2024-11-19 08:01:26.929324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.929379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.929500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.929535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.929675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.929723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.929857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.929906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.930050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.930087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 
00:37:35.258 [2024-11-19 08:01:26.930222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.930262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.930426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.930466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.930599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.930652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.930798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.930835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.930971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.931010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 
00:37:35.258 [2024-11-19 08:01:26.931218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.931281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.931403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.931442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.931554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.931589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.931711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.258 [2024-11-19 08:01:26.931749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.258 qpair failed and we were unable to recover it. 00:37:35.258 [2024-11-19 08:01:26.931857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.931904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 
00:37:35.259 [2024-11-19 08:01:26.932042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.932078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.932217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.932252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.932416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.932452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.932590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.932627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.932742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.932778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 
00:37:35.259 [2024-11-19 08:01:26.932931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.932967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.933148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.933199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.933310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.933344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.933479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.933514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.933663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.933730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 
00:37:35.259 [2024-11-19 08:01:26.933843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.933878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.934037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.934071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.934205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.934245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.934461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.934520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.934698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.934736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 
00:37:35.259 [2024-11-19 08:01:26.934865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.934902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.935038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.935081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.935247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.935282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.935420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.935453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.935565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.935598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 
00:37:35.259 [2024-11-19 08:01:26.935816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.935852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.935973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.936021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.936210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.936279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.936494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.936529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.936663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.936703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 
00:37:35.259 [2024-11-19 08:01:26.936841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.936876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.937021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.937075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.937242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.937342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.937527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.937585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.937742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.937777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 
00:37:35.259 [2024-11-19 08:01:26.937877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.937911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.259 [2024-11-19 08:01:26.938048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.259 [2024-11-19 08:01:26.938086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.259 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.938215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.938267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.938427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.938466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.938656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.938697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 
00:37:35.260 [2024-11-19 08:01:26.938802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.938835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.938940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.938978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.939112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.939145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.939341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.939375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.939512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.939546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 
00:37:35.260 [2024-11-19 08:01:26.939736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.939786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.939924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.939961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.940104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.940140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.940307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.940342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.940462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.940511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 
00:37:35.260 [2024-11-19 08:01:26.940646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.940704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.940850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.940885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.941023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.941056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.941152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.941186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.941334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.941385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 
00:37:35.260 [2024-11-19 08:01:26.941523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.941564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.941750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.941785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.941939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.941988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.942175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.942216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 00:37:35.260 [2024-11-19 08:01:26.942361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.260 [2024-11-19 08:01:26.942400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.260 qpair failed and we were unable to recover it. 
00:37:35.260 [2024-11-19 08:01:26.942592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.942627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.942744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.942779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.942893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.942926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.943044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.943080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.943219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.943255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.943479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.943550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.943708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.943742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.943848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.943882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.944054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.944088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.944190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.944224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.944403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.944440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.944558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.944608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.944764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.944814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.944967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.945003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.260 qpair failed and we were unable to recover it.
00:37:35.260 [2024-11-19 08:01:26.945143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.260 [2024-11-19 08:01:26.945178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.945333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.945371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.945498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.945550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.945707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.945762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.945875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.945911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.946045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.946079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.946181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.946215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.946365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.946406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.946530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.946579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.946714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.946763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.946908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.946943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.947043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.947078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.947304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.947342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.947496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.947547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.947725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.947776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.947889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.947923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.948027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.948061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.948275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.948309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.948440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.948473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.948623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.948657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.948822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.948890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.949116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.949154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.949309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.949343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.949481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.949516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.949684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.949727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.949861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.949896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.950003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.950057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.950221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.950254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.950360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.950394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.950491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.950525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.950647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.950706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.950841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.950889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.951007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.951043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.951210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.951245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.951386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.951437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.951594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.951644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.951763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.951799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.951913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.261 [2024-11-19 08:01:26.951947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.261 qpair failed and we were unable to recover it.
00:37:35.261 [2024-11-19 08:01:26.952055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.952089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.952241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.952317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.952518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.952555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.952790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.952830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.952946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.952982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.953143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.953180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.953326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.953363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.953474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.953512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.953696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.953730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.953843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.953883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.954019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.954057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.954239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.954273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.954393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.954428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.954561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.954613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.954750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.954784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.954885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.954918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.955089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.955123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.955310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.955348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.955468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.955518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.955643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.955701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.955848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.955886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.956026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.956062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.956168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.956202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.956382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.956417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.956578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.956612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.956759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.956793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.956952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.956985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.957086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.957139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.957380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.957436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.957604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.957641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.957809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.957845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.957971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.958020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.958220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.958278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.958528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.958588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.958712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.958765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.958908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.958943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.959083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.959119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.959219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.262 [2024-11-19 08:01:26.959254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.262 qpair failed and we were unable to recover it.
00:37:35.262 [2024-11-19 08:01:26.959397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.959432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.959659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.959715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.959869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.959905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.960036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.960070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.960174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.960209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.960407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.960441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.960544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.960578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.960699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.960736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.960851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.960886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.960995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.961030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.961136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.961172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.961321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.961362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.961530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.961597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.961712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.961747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.961875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.961924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.962119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.962158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.962345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.962409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.962574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.962612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.962773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.962809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.962980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.963030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.963251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.963288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.963455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.963525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.963727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.963776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.963944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.964000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.964204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.263 [2024-11-19 08:01:26.964239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.263 qpair failed and we were unable to recover it.
00:37:35.263 [2024-11-19 08:01:26.964349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.964383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 00:37:35.263 [2024-11-19 08:01:26.964530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.964579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 00:37:35.263 [2024-11-19 08:01:26.964719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.964767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 00:37:35.263 [2024-11-19 08:01:26.964887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.964956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 00:37:35.263 [2024-11-19 08:01:26.965176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.965212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 
00:37:35.263 [2024-11-19 08:01:26.965460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.965498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 00:37:35.263 [2024-11-19 08:01:26.965642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.965680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 00:37:35.263 [2024-11-19 08:01:26.965913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.965947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 00:37:35.263 [2024-11-19 08:01:26.966085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.966120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 00:37:35.263 [2024-11-19 08:01:26.966306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.966365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 
00:37:35.263 [2024-11-19 08:01:26.966542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.966579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.263 qpair failed and we were unable to recover it. 00:37:35.263 [2024-11-19 08:01:26.966739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.263 [2024-11-19 08:01:26.966789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.966936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.966973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.967116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.967152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.967389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.967428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 
00:37:35.264 [2024-11-19 08:01:26.967576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.967614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.967733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.967785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.967968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.968017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.968160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.968194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.968324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.968361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 
00:37:35.264 [2024-11-19 08:01:26.968527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.968564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.968706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.968741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.968875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.968916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.969111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.969172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.969422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.969483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 
00:37:35.264 [2024-11-19 08:01:26.969661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.969713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.969879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.969933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.970053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.970088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.970231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.970266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.970469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.970529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 
00:37:35.264 [2024-11-19 08:01:26.970697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.970732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.970882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.970931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.971077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.971115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.971252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.971305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.971444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.971479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 
00:37:35.264 [2024-11-19 08:01:26.971640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.971698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.971848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.971885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.972003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.972045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.972211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.972246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.972375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.972409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 
00:37:35.264 [2024-11-19 08:01:26.972552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.972587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.972700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.972735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.972839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.972873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.973015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.973115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.973263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.973298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 
00:37:35.264 [2024-11-19 08:01:26.973434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.973468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.264 qpair failed and we were unable to recover it. 00:37:35.264 [2024-11-19 08:01:26.973604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.264 [2024-11-19 08:01:26.973638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.973795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.973830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.973967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.974002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.974103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.974137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 
00:37:35.265 [2024-11-19 08:01:26.974275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.974310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.974464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.974512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.974666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.974726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.974940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.974994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.975177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.975231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 
00:37:35.265 [2024-11-19 08:01:26.975404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.975472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.975607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.975646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.975785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.975822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.975977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.976028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.976228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.976263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 
00:37:35.265 [2024-11-19 08:01:26.976393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.976427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.976665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.976732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.976960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.976997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.977161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.977196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.977295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.977329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 
00:37:35.265 [2024-11-19 08:01:26.977485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.977519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.977634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.977697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.977820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.977856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.978023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.978078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.978298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.978336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 
00:37:35.265 [2024-11-19 08:01:26.978484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.978520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.978678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.978725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.978897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.978945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.979170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.979218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.979361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.979396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 
00:37:35.265 [2024-11-19 08:01:26.979606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.979640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.979779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.979815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.979941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.979993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.980264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.980322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 00:37:35.265 [2024-11-19 08:01:26.980438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.265 [2024-11-19 08:01:26.980492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.265 qpair failed and we were unable to recover it. 
00:37:35.265 [... the same three-line error block (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats continuously from 08:01:26.980621 through 08:01:27.003454, targeting addr=10.0.0.2, port=4420 with tqpair cycling among 0x6150001f2f00, 0x615000210000, 0x61500021ff00, and 0x6150001ffe80 ...]
00:37:35.268 [2024-11-19 08:01:27.003611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.268 [2024-11-19 08:01:27.003677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.268 qpair failed and we were unable to recover it. 00:37:35.268 [2024-11-19 08:01:27.003805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.268 [2024-11-19 08:01:27.003843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.268 qpair failed and we were unable to recover it. 00:37:35.268 [2024-11-19 08:01:27.003968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.268 [2024-11-19 08:01:27.004021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.268 qpair failed and we were unable to recover it. 00:37:35.268 [2024-11-19 08:01:27.004202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.004253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.004468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.004503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 
00:37:35.269 [2024-11-19 08:01:27.004614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.004660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.004841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.004895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.005039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.005104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.005272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.005325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.005518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.005552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 
00:37:35.269 [2024-11-19 08:01:27.005668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.005708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.005818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.005870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.006038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.006107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.006284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.006351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.006466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.006519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 
00:37:35.269 [2024-11-19 08:01:27.006630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.006666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.006839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.006889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.007122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.007159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.007324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.007360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.007572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.007607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 
00:37:35.269 [2024-11-19 08:01:27.007730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.007766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.007901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.007936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.008096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.008131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.008287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.008348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.008549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.008585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 
00:37:35.269 [2024-11-19 08:01:27.008679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.008720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.008857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.008892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.009017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.009066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.009186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.009241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.009405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.009470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 
00:37:35.269 [2024-11-19 08:01:27.009609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.009645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.009792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.009827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.010007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.010047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.010201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.010239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.010388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.010441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 
00:37:35.269 [2024-11-19 08:01:27.010574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.010614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.010800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.010855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.011033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.011100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.011252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.011308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.011466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.011519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 
00:37:35.269 [2024-11-19 08:01:27.011653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.269 [2024-11-19 08:01:27.011697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.269 qpair failed and we were unable to recover it. 00:37:35.269 [2024-11-19 08:01:27.011817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.011858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.012013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.012061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.012208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.012243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.012368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.012404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 
00:37:35.270 [2024-11-19 08:01:27.012514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.012549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.012659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.012699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.012837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.012873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.012976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.013012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.013177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.013211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 
00:37:35.270 [2024-11-19 08:01:27.013350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.013385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.013551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.013589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.013772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.013807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.013916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.013950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.014149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.014183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 
00:37:35.270 [2024-11-19 08:01:27.014300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.014335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.014479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.014514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.014687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.014729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.014853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.014888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.015018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.015053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 
00:37:35.270 [2024-11-19 08:01:27.015221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.015283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.015536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.015585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.015702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.015740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.015873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.015921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.016043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.016080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 
00:37:35.270 [2024-11-19 08:01:27.016195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.016229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.016357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.016391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.016521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.016560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.016745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.016793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.016915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.016953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 
00:37:35.270 [2024-11-19 08:01:27.017099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.017167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.017320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.017375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.017497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.017550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.017683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.017726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.017853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.017889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 
00:37:35.270 [2024-11-19 08:01:27.018018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.018071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.018287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.018386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.018598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.018636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.270 [2024-11-19 08:01:27.018790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.270 [2024-11-19 08:01:27.018829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.270 qpair failed and we were unable to recover it. 00:37:35.271 [2024-11-19 08:01:27.018984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.271 [2024-11-19 08:01:27.019033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.271 qpair failed and we were unable to recover it. 
00:37:35.274 (the same connect() failed, errno = 111 (ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for addr=10.0.0.2, port=4420 repeats ~110 more times between [2024-11-19 08:01:27.019169] and [2024-11-19 08:01:27.040375], cycling through tqpairs 0x6150001f2f00, 0x6150001ffe80, 0x61500021ff00, and 0x615000210000; every attempt ends "qpair failed and we were unable to recover it.")
00:37:35.274 [2024-11-19 08:01:27.040521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.040586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.040714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.040763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.040909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.040946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.041186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.041225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.041379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.041417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 
00:37:35.274 [2024-11-19 08:01:27.041531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.041569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.041750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.041798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.041990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.042032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.042253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.042289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.042427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.042465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 
00:37:35.274 [2024-11-19 08:01:27.042589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.042623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.042789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.042824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.042953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.042990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.043174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.043212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.043398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.043450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 
00:37:35.274 [2024-11-19 08:01:27.043594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.043632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.043800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.043834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.043989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.044038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.044259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.044296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.044559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.044619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 
00:37:35.274 [2024-11-19 08:01:27.044801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.044837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.044941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.044977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.045215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.045277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.045467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.045529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.045713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.045780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 
00:37:35.274 [2024-11-19 08:01:27.045893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.045932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.046080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.046134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.046400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.046473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.046604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.046643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.046812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.046860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 
00:37:35.274 [2024-11-19 08:01:27.047002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.047043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.047228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.047287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.047396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.047434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.047600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.047640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 00:37:35.274 [2024-11-19 08:01:27.047811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.274 [2024-11-19 08:01:27.047859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.274 qpair failed and we were unable to recover it. 
00:37:35.274 [2024-11-19 08:01:27.048003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.048038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.048234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.048271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.048388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.048423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.048557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.048591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.048729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.048764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 
00:37:35.275 [2024-11-19 08:01:27.048937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.048992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.049207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.049271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.049452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.049513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.049677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.049718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.049877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.049910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 
00:37:35.275 [2024-11-19 08:01:27.050038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.050075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.050273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.050311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.050461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.050495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.050669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.050711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.050840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.050874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 
00:37:35.275 [2024-11-19 08:01:27.050994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.051042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.051219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.051273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.051458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.051497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.051629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.051668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.051838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.051872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 
00:37:35.275 [2024-11-19 08:01:27.052021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.052059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.052301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.052358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.052507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.052540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.052700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.052747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.052889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.052924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 
00:37:35.275 [2024-11-19 08:01:27.053113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.053149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.053332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.053365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.053553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.053590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.053737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.053788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.053905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.053937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 
00:37:35.275 [2024-11-19 08:01:27.054054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.054103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.054244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.054281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.054408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.054444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.054669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.054729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.054859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.054892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 
00:37:35.275 [2024-11-19 08:01:27.055030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.055064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.055170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.055203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.055371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.275 [2024-11-19 08:01:27.055408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.275 qpair failed and we were unable to recover it. 00:37:35.275 [2024-11-19 08:01:27.055547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.055589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.055786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.055835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 
00:37:35.276 [2024-11-19 08:01:27.055982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.056020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.056145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.056180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.056322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.056357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.056469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.056505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.056737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.056786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 
00:37:35.276 [2024-11-19 08:01:27.056905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.056940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.057054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.057088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.057208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.057243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.057359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.057393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.057568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.057606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 
00:37:35.276 [2024-11-19 08:01:27.057764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.057799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.057930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.057972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.058091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.058124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.058230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.058264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.058390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.058424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 
00:37:35.276 [2024-11-19 08:01:27.058555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.058593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.058783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.058833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.058992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.059040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.059156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.059193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.059332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.059370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 
00:37:35.276 [2024-11-19 08:01:27.059508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.059546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.059694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.059729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.059838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.059873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.060042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.060080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.060237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.060290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 
00:37:35.276 [2024-11-19 08:01:27.060434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.060474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.060651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.060686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.060807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.060840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.060948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.060983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.061127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.061184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 
00:37:35.276 [2024-11-19 08:01:27.061335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.061388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.061488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.061522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.061665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.061707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.061815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.061849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.276 [2024-11-19 08:01:27.062003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.062041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 
00:37:35.276 [2024-11-19 08:01:27.062184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.276 [2024-11-19 08:01:27.062250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.276 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.062375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.062428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.062561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.062597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.062740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.062781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.062891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.062926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 
00:37:35.277 [2024-11-19 08:01:27.063119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.063158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.063321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.063356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.063499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.063534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.063640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.063675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.063798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.063832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 
00:37:35.277 [2024-11-19 08:01:27.063937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.063972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.064133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.064167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.064269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.064304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.064423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.064477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.064607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.064642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 
00:37:35.277 [2024-11-19 08:01:27.064758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.064793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.064934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.064969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.065132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.065171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.065295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.065334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.065508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.065543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 
00:37:35.277 [2024-11-19 08:01:27.065649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.065682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.065801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.065834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.065973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.066007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.066215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.066253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.066409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.066443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 
00:37:35.277 [2024-11-19 08:01:27.066540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.066574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.066692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.066726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.066843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.066898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.067082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.067120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.067245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.067281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 
00:37:35.277 [2024-11-19 08:01:27.067429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.067463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.067572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.067606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.067734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.067783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.067963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.067999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 00:37:35.277 [2024-11-19 08:01:27.068133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.277 [2024-11-19 08:01:27.068183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.277 qpair failed and we were unable to recover it. 
00:37:35.277 [2024-11-19 08:01:27.068326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.068361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.068461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.068495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.068636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.068670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.068782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.068816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.068952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.068985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 
00:37:35.278 [2024-11-19 08:01:27.069088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.069140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.069280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.069314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.069436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.069472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.069601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.069641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.069750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.069784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 
00:37:35.278 [2024-11-19 08:01:27.069919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.069953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.070095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.070129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.070257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.070295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.070483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.070517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.070634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.070682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 
00:37:35.278 [2024-11-19 08:01:27.070849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.070898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.071029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.071082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.071223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.071274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.071417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.071451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.071564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.071599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 
00:37:35.278 [2024-11-19 08:01:27.071715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.071750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.071875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.071914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.072062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.072109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.072250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.072285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.072450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.072484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 
00:37:35.278 [2024-11-19 08:01:27.072599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.072634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.072771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.072819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.072935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.072971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.073120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.073156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.073293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.073338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 
00:37:35.278 [2024-11-19 08:01:27.073555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.073589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.073714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.073763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.073902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.073942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.074061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.074096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 00:37:35.278 [2024-11-19 08:01:27.074232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.278 [2024-11-19 08:01:27.074267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.278 qpair failed and we were unable to recover it. 
00:37:35.278 [2024-11-19 08:01:27.074385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.278 [2024-11-19 08:01:27.074421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.278 qpair failed and we were unable to recover it.
00:37:35.278 [2024-11-19 08:01:27.074544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.278 [2024-11-19 08:01:27.074592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.278 qpair failed and we were unable to recover it.
00:37:35.278 [2024-11-19 08:01:27.074724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.278 [2024-11-19 08:01:27.074759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.278 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.074925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.074963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.075072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.075111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.075268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.075337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.075501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.075535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.075670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.075712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.075861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.075900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.076036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.076088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.076244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.076279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.076385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.076418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.076573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.076613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.076725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.076765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.076924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.076963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.077086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.077121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.077256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.077290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.077443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.077490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.077613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.077649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.077801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.077839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.077953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.077989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.078086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.078121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.078249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.078284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.078396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.078432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.078532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.078567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.078684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.078744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.078894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.078930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.079129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.079182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.079285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.079318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.079427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.079460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.079561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.079594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.079711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.079744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.079881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.079915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.080025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.080058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.080169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.080206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.080314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.080349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.080469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.080518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.080643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.080679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.080874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.080922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.081045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.081082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.279 qpair failed and we were unable to recover it.
00:37:35.279 [2024-11-19 08:01:27.081228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.279 [2024-11-19 08:01:27.081264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.081420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.081460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.081610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.081645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.081764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.081799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.081912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.081948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.082092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.082127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.082261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.082310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.082436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.082472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.082609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.082643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.082760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.082793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.082906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.082939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.083037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.083070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.083177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.083213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.083329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.083369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.083483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.083519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.083671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.083717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.083848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.083883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.084004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.084052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.084172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.084207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.084345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.084416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.084561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.084597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.084711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.084747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.084870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.084905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.085026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.085063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.085233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.085268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.085376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.085411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.085582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.085617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.085760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.085809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.085947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.085984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.086130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.086164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.086309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.086343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.086458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.086494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.086610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.086645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.086764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.086800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.086921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.086974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.087126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.087180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.087325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.087359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.087469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.280 [2024-11-19 08:01:27.087504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.280 qpair failed and we were unable to recover it.
00:37:35.280 [2024-11-19 08:01:27.087612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.087648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.087795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.087832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.087977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.088025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.088133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.088169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.088283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.088317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.088422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.088456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.088628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.088666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.088814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.088850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.088960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.088994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.089097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.089132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.089239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.089273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.089382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.089418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.089541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.089590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.089702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.089757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.089912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.089946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.090051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.090090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.090206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.281 [2024-11-19 08:01:27.090240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.281 qpair failed and we were unable to recover it.
00:37:35.281 [2024-11-19 08:01:27.090369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.090403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.090515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.090549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.090686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.090748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.090935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.090989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.091133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.091185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 
00:37:35.281 [2024-11-19 08:01:27.091286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.091321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.091447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.091481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.091630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.091679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.091831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.091880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.091998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.092034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 
00:37:35.281 [2024-11-19 08:01:27.092182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.092216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.092321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.092355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.092512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.092547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.092680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.092724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.092872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.092912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 
00:37:35.281 [2024-11-19 08:01:27.093067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.093101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.093246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.093281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.093394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.093428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.093548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.093598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.093744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.093781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 
00:37:35.281 [2024-11-19 08:01:27.093901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.281 [2024-11-19 08:01:27.093935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.281 qpair failed and we were unable to recover it. 00:37:35.281 [2024-11-19 08:01:27.094072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.094107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.094303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.094338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.094444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.094479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.094633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.094682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 
00:37:35.282 [2024-11-19 08:01:27.094831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.094871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.095032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.095081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.095242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.095306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.095439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.095474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.095593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.095627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 
00:37:35.282 [2024-11-19 08:01:27.095822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.095875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.096032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.096096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.096299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.096340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.096477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.096536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.096670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.096710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 
00:37:35.282 [2024-11-19 08:01:27.096820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.096873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.097060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.097098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.097237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.097297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.097423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.097466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.097615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.097652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 
00:37:35.282 [2024-11-19 08:01:27.097806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.097841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.097964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.098018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.098140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.098178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.098405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.098439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.098558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.098593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 
00:37:35.282 [2024-11-19 08:01:27.098706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.098741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.098876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.098912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.099053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.099086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.099217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.099266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.099408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.099442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 
00:37:35.282 [2024-11-19 08:01:27.099592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.099641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.099767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.099809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.099950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.100003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.100133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.100185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 00:37:35.282 [2024-11-19 08:01:27.100331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.282 [2024-11-19 08:01:27.100369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.282 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-11-19 08:01:27.100504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.100546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.100803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.100851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.101027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.101091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.101254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.101312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.101466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.101521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-11-19 08:01:27.101621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.101656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.101800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.101836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.101956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.101991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.102097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.102131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.102227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.102261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-11-19 08:01:27.102376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.102414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.102543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.102592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.102718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.102754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.102913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.102963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.103093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.103148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-11-19 08:01:27.103334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.103389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.103527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.103562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.103686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.103728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.103849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.103888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.104010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.104062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-11-19 08:01:27.104194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.104229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.104344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.104379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.104556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.104592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.104716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.104773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.104896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.104933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-11-19 08:01:27.105088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.105146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.105312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.105354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.105489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.105526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.105642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.105682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 00:37:35.283 [2024-11-19 08:01:27.105836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.283 [2024-11-19 08:01:27.105884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.283 qpair failed and we were unable to recover it. 
00:37:35.283 [2024-11-19 08:01:27.106037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.283 [2024-11-19 08:01:27.106090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.283 qpair failed and we were unable to recover it.
00:37:35.283 [... the same posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock error pair (connect() failed, errno = 111; addr=10.0.0.2, port=4420) repeats through 08:01:27.126879 for tqpairs 0x6150001ffe80, 0x61500021ff00, 0x615000210000, and 0x6150001f2f00; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:35.286 [2024-11-19 08:01:27.127007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-11-19 08:01:27.127044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-11-19 08:01:27.127189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-11-19 08:01:27.127224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.286 qpair failed and we were unable to recover it. 00:37:35.286 [2024-11-19 08:01:27.127356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.286 [2024-11-19 08:01:27.127390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.127535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.127571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.127685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.127738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 
00:37:35.287 [2024-11-19 08:01:27.127864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.127900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.128013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.128047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.128158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.128193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.128310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.128347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.128491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.128527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 
00:37:35.287 [2024-11-19 08:01:27.128640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.128675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.128804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.128839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.128977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.129011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.129145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.129180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.129283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.129316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 
00:37:35.287 [2024-11-19 08:01:27.129456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.129490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.129585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.129620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.129807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.129855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.129988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.130026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.130165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.130201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 
00:37:35.287 [2024-11-19 08:01:27.130308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.130343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.130449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.130483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.130601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.130638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.130755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.130790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.130937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.130985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 
00:37:35.287 [2024-11-19 08:01:27.131104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.131140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.131261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.131297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.131395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.131429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.131573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.131608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.131714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.131748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 
00:37:35.287 [2024-11-19 08:01:27.131921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.131957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.132105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.132149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.132312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.132346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.132447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.132481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.132601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:35.287 [2024-11-19 08:01:27.132775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.132823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 
00:37:35.287 [2024-11-19 08:01:27.132969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.133005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.133230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.133266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.133399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.133434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.133561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.133597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 00:37:35.287 [2024-11-19 08:01:27.133747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.287 [2024-11-19 08:01:27.133796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.287 qpair failed and we were unable to recover it. 
00:37:35.288 [2024-11-19 08:01:27.133941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.133979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.134199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.134235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.134446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.134481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.134600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.134636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.134789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.134824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 
00:37:35.288 [2024-11-19 08:01:27.134933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.134967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.135064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.135098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.135211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.135245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.135385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.135433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.135577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.135614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 
00:37:35.288 [2024-11-19 08:01:27.135755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.135792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.135915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.135955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.136102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.136137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.136268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.136303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.136415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.136450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 
00:37:35.288 [2024-11-19 08:01:27.136607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.136655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.136787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.136836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.136960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.136995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.137109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.137145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.137287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.137321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 
00:37:35.288 [2024-11-19 08:01:27.137435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.137470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.137580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.137614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.137735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.137774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.137925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.137961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.138065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.138099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 
00:37:35.288 [2024-11-19 08:01:27.138247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.138282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.138392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.138425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.138535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.138569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.138711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.138747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.138864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.138904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 
00:37:35.288 [2024-11-19 08:01:27.139046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.139082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.139225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.139260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.139371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.139406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.139524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.139572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.139714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.139749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 
00:37:35.288 [2024-11-19 08:01:27.139891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.139926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.140033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.140067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.288 qpair failed and we were unable to recover it. 00:37:35.288 [2024-11-19 08:01:27.140184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.288 [2024-11-19 08:01:27.140219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.289 qpair failed and we were unable to recover it. 00:37:35.289 [2024-11-19 08:01:27.140345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.289 [2024-11-19 08:01:27.140382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.289 qpair failed and we were unable to recover it. 00:37:35.289 [2024-11-19 08:01:27.140547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.289 [2024-11-19 08:01:27.140583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.289 qpair failed and we were unable to recover it. 
00:37:35.289 [... the same "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triple repeats through [2024-11-19 08:01:27.158939] for tqpairs 0x6150001f2f00, 0x6150001ffe80, 0x615000210000 and 0x61500021ff00, all with addr=10.0.0.2, port=4420 ...]
00:37:35.292 [2024-11-19 08:01:27.159050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.159084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.159228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.159262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.159404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.159440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.159554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.159590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.159732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.159768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 
00:37:35.292 [2024-11-19 08:01:27.159870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.159906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.160022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.160058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.160189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.160224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.160345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.160379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.160521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.160556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 
00:37:35.292 [2024-11-19 08:01:27.160697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.160733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.160855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.160889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.160992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.161026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.161218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.161266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.161418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.161455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 
00:37:35.292 [2024-11-19 08:01:27.161674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.161717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.161824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.161859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.161992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.162026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.162162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.162197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.162334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.162369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 
00:37:35.292 [2024-11-19 08:01:27.162475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.162509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.162650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.162698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.162826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.162862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.162997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.163043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.163209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.163244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 
00:37:35.292 [2024-11-19 08:01:27.163391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.163427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.163563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.163598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.163718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.163755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.163867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.163901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.164118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.164155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 
00:37:35.292 [2024-11-19 08:01:27.164319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.164379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.164509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.164545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.164665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.164707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.292 [2024-11-19 08:01:27.164813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.292 [2024-11-19 08:01:27.164868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.292 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.165068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.165125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 
00:37:35.293 [2024-11-19 08:01:27.165251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.165290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.165419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.165459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.165615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.165650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.165783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.165831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.165962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.166002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 
00:37:35.293 [2024-11-19 08:01:27.166168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.166203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.166317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.166351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.166457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.166490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.166614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.166650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.166773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.166810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 
00:37:35.293 [2024-11-19 08:01:27.166959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.166994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.167139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.167174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.167281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.167316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.293 [2024-11-19 08:01:27.167429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.293 [2024-11-19 08:01:27.167463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.293 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.167582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.167632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 
00:37:35.583 [2024-11-19 08:01:27.167780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.167816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.167943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.167992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.168166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.168202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.168315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.168351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.168459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.168495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 
00:37:35.583 [2024-11-19 08:01:27.168634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.168669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.168794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.168842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.168968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.169003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.169107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.169142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.169282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.169316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 
00:37:35.583 [2024-11-19 08:01:27.169427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.169462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.169603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.169643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.169757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.169794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.169924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.169962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.170079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.170119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 
00:37:35.583 [2024-11-19 08:01:27.170237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.170274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.170400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.170434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.170570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.170605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.170746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.170782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.170895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.170929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 
00:37:35.583 [2024-11-19 08:01:27.171037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.171072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.171233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.171267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.171405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.171439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.171585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.171620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 00:37:35.583 [2024-11-19 08:01:27.171739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.583 [2024-11-19 08:01:27.171775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.583 qpair failed and we were unable to recover it. 
00:37:35.583 [2024-11-19 08:01:27.171920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.583 [2024-11-19 08:01:27.171954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.583 qpair failed and we were unable to recover it.
00:37:35.583 [2024-11-19 08:01:27.172071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.583 [2024-11-19 08:01:27.172105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.583 qpair failed and we were unable to recover it.
00:37:35.583 [2024-11-19 08:01:27.172216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.583 [2024-11-19 08:01:27.172251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.172409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.172459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.172592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.172641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.172766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.172803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.172906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.172940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.173076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.173110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.173218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.173252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.173357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.173397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.173533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.173572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.173699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.173736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.173846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.173882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.174001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.174036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.174175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.174209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.174310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.174343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.174453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.174487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.174631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.174665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.174814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.174851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.174961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.174995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.175118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.175153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.175289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.175324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.175445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.175483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.175619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.175656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.175788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.175824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.175929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.175966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.176072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.176111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.176226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.176260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.176396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.176431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.176557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.176605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.176774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.176823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.176998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.177033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.177136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.177174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.177286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.177320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.177425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.177459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.584 qpair failed and we were unable to recover it.
00:37:35.584 [2024-11-19 08:01:27.177609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.584 [2024-11-19 08:01:27.177643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.177793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.177832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.178053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.178089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.178263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.178298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.178438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.178473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.178592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.178627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.178764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.178812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.178926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.178962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.179102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.179136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.179271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.179305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.179414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.179448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.179557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.179596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.179715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.179753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.179897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.179932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.180071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.180106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.180213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.180249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.180403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.180452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.180622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.180657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.180808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.180859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.181006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.181041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.181188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.181223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.181338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.181373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.181536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.181570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.181683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.181728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.181882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.181930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.182075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.182111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.182244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.182280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.182395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.182429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.182555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.182604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.182736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.182773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.182916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.182951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.183059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.183094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.585 qpair failed and we were unable to recover it.
00:37:35.585 [2024-11-19 08:01:27.183212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.585 [2024-11-19 08:01:27.183247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.183352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.183387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.183526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.183560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.183696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.183745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.183899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.183946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.184077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.184125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.184268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.184305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.184433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.184468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.184604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.184638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.184759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.184795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.185013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.185047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.185184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.185224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.185340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.185375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.185500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.185549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.185700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.185738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.185856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.185891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.185995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.186029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.186175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.186210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.186313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.186346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.186452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.186486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.186595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.186629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.186793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.186841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.186957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.186993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.187098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.187133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.187248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.187282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.187425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.187460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.187575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.187615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.187739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.187775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.187883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.187918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.188026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.188060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.188170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.188204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.188346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.188380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.188480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.188514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.188630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.586 [2024-11-19 08:01:27.188664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.586 qpair failed and we were unable to recover it.
00:37:35.586 [2024-11-19 08:01:27.188828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.188876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.189043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.189091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.189262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.189296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.189395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.189429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.189547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.189581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.189683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.189725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.189862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.189896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.189997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.190031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.190138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.190171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.190284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.190321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.190438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.190474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.190588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.190622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.190754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.190788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.190912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.587 [2024-11-19 08:01:27.190960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.587 qpair failed and we were unable to recover it.
00:37:35.587 [2024-11-19 08:01:27.191093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.191129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.191278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.191313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.191446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.191481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.191663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.191706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.191818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.191864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 
00:37:35.587 [2024-11-19 08:01:27.191999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.192034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.192134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.192169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.192309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.192344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.192480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.192516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.192629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.192662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 
00:37:35.587 [2024-11-19 08:01:27.192892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.192927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.193036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.193071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.193202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.193236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.193372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.193407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.193577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.193624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 
00:37:35.587 [2024-11-19 08:01:27.193746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.193784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.193929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.193964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.194080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.194114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.587 qpair failed and we were unable to recover it. 00:37:35.587 [2024-11-19 08:01:27.194244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.587 [2024-11-19 08:01:27.194283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.194409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.194443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 
00:37:35.588 [2024-11-19 08:01:27.194554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.194589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.194718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.194767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.194900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.194948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.195099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.195134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.195242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.195276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 
00:37:35.588 [2024-11-19 08:01:27.195386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.195420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.195524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.195558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.195700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.195737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.195869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.195905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.196010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.196044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 
00:37:35.588 [2024-11-19 08:01:27.196159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.196193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.196325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.196360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.196543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.196583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.196705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.196740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.196896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.196945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 
00:37:35.588 [2024-11-19 08:01:27.197090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.197127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.197249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.197285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.197398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.197433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.197586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.197621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.197741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.197790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 
00:37:35.588 [2024-11-19 08:01:27.197900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.197935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.198043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.198078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.198178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.198211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.198352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.198387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.198501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.198537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 
00:37:35.588 [2024-11-19 08:01:27.198703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.588 [2024-11-19 08:01:27.198753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.588 qpair failed and we were unable to recover it. 00:37:35.588 [2024-11-19 08:01:27.198912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.198950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.199087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.199122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.199336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.199370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.199512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.199547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 
00:37:35.589 [2024-11-19 08:01:27.199668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.199724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.199880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.199929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.200055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.200093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.200216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.200252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.200404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.200439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 
00:37:35.589 [2024-11-19 08:01:27.200569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.200603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.200712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.200748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.200859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.200894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.201107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.201147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.201283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.201318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 
00:37:35.589 [2024-11-19 08:01:27.201451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.201485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.201633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.201672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.201800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.201837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.201955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.202003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.202129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.202164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 
00:37:35.589 [2024-11-19 08:01:27.202304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.202339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.202475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.202510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.202611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.202645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.202793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.202828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.203051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.203087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 
00:37:35.589 [2024-11-19 08:01:27.203200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.203234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.203364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.203397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.203653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.203696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.203816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.203854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.204026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.204061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 
00:37:35.589 [2024-11-19 08:01:27.204171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.204206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.204320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.204354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.204502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.204552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.204668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.204713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 00:37:35.589 [2024-11-19 08:01:27.204830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.589 [2024-11-19 08:01:27.204865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.589 qpair failed and we were unable to recover it. 
00:37:35.593 [2024-11-19 08:01:27.224561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.224596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.224714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.224753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.224872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.224907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.225025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.225076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.225209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.225243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 
00:37:35.593 [2024-11-19 08:01:27.225374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.225411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.225544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.225583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.225747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.225797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.225957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.226005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.226157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.226193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 
00:37:35.593 [2024-11-19 08:01:27.226381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.226419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.226581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.226621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.226758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.226792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.226912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.226966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.227150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.227187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 
00:37:35.593 [2024-11-19 08:01:27.227310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.227348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.227469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.227506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.227649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.227693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.227835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.227870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.227971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.228006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 
00:37:35.593 [2024-11-19 08:01:27.228137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.228189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.228341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.228380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.228531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.228569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.228702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.228737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.228871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.228905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 
00:37:35.593 [2024-11-19 08:01:27.229034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.229072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.229213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.229251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.229398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.229436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.593 qpair failed and we were unable to recover it. 00:37:35.593 [2024-11-19 08:01:27.229582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.593 [2024-11-19 08:01:27.229637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.229787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.229825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 
00:37:35.594 [2024-11-19 08:01:27.230004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.230063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.230227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.230273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.230392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.230430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.230590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.230630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.230769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.230824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 
00:37:35.594 [2024-11-19 08:01:27.230962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.230996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.231131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.231167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.231301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.231336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.231504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.231538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.231715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.231764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 
00:37:35.594 [2024-11-19 08:01:27.231932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.231981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.232102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.232139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.232341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.232399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.232554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.232592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.232771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.232826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 
00:37:35.594 [2024-11-19 08:01:27.232955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.233009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.233166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.233235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.233465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.233518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.233653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.233696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.233873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.233916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 
00:37:35.594 [2024-11-19 08:01:27.234117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.234157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.234308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.234368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.234520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.234554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.234694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.234729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.234848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.234897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 
00:37:35.594 [2024-11-19 08:01:27.235084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.235146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.235245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.235280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.235444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.235501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.235651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.235686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.235870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.235923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 
00:37:35.594 [2024-11-19 08:01:27.236112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.236172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.236396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.236454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.594 qpair failed and we were unable to recover it. 00:37:35.594 [2024-11-19 08:01:27.236602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.594 [2024-11-19 08:01:27.236636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.236814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.236849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.237030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.237085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 
00:37:35.595 [2024-11-19 08:01:27.237225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.237261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.237436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.237493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.237714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.237751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.237894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.237952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.238106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.238159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 
00:37:35.595 [2024-11-19 08:01:27.238368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.238423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.238569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.238604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.238745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.238808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.238925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.238960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 00:37:35.595 [2024-11-19 08:01:27.239062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.595 [2024-11-19 08:01:27.239096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.595 qpair failed and we were unable to recover it. 
00:37:35.595 [2024-11-19 08:01:27.239254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.595 [2024-11-19 08:01:27.239302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.595 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure errors (errno = 111 to 10.0.0.2:4420) repeat from 08:01:27.239 through 08:01:27.262 for tqpairs 0x6150001ffe80, 0x6150001f2f00, 0x615000210000, and 0x61500021ff00 ...]
00:37:35.598 [2024-11-19 08:01:27.262367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.598 [2024-11-19 08:01:27.262403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.598 qpair failed and we were unable to recover it. 00:37:35.598 [2024-11-19 08:01:27.262516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.598 [2024-11-19 08:01:27.262563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.598 qpair failed and we were unable to recover it. 00:37:35.598 [2024-11-19 08:01:27.262769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.598 [2024-11-19 08:01:27.262824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.598 qpair failed and we were unable to recover it. 00:37:35.598 [2024-11-19 08:01:27.262964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.598 [2024-11-19 08:01:27.263003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.598 qpair failed and we were unable to recover it. 00:37:35.598 [2024-11-19 08:01:27.263153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.598 [2024-11-19 08:01:27.263191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.598 qpair failed and we were unable to recover it. 
00:37:35.598 [2024-11-19 08:01:27.263378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.598 [2024-11-19 08:01:27.263412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.598 qpair failed and we were unable to recover it. 00:37:35.598 [2024-11-19 08:01:27.263514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.598 [2024-11-19 08:01:27.263549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.598 qpair failed and we were unable to recover it. 00:37:35.598 [2024-11-19 08:01:27.263698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.598 [2024-11-19 08:01:27.263739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.598 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.263902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.263936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.264053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.264087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 
00:37:35.599 [2024-11-19 08:01:27.264189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.264223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.264355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.264390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.264549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.264604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.264755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.264791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.265022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.265071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 
00:37:35.599 [2024-11-19 08:01:27.265191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.265229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.265428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.265463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.265599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.265633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.265802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.265850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.265986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.266055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 
00:37:35.599 [2024-11-19 08:01:27.266248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.266304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.266530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.266566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.266761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.266797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.266982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.267035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.267242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.267283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 
00:37:35.599 [2024-11-19 08:01:27.267501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.267549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.267666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.267709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.267830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.267865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.268007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.268061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.268294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.268354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 
00:37:35.599 [2024-11-19 08:01:27.268519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.268557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.268736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.268771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.268928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.268977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.269124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.269162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.269340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.269379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 
00:37:35.599 [2024-11-19 08:01:27.269504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.269543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.269704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.269743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.269900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.269949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.270120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.270193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.270334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.270375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 
00:37:35.599 [2024-11-19 08:01:27.270523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.270561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.270721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.599 [2024-11-19 08:01:27.270773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.599 qpair failed and we were unable to recover it. 00:37:35.599 [2024-11-19 08:01:27.270888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.270923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.271053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.271087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.271191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.271227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 
00:37:35.600 [2024-11-19 08:01:27.271465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.271531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.271695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.271744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.271872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.271909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.272074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.272108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.272260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.272316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 
00:37:35.600 [2024-11-19 08:01:27.272493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.272531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.272741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.272778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.272895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.272931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.273069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.273104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.273215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.273249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 
00:37:35.600 [2024-11-19 08:01:27.273424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.273492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.273620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.273657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.273836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.273884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.274046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.274086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.274277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.274336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 
00:37:35.600 [2024-11-19 08:01:27.274494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.274539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.274679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.274739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.274932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.274980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.275220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.275261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.275452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.275491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 
00:37:35.600 [2024-11-19 08:01:27.275665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.275710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.275883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.275918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.276021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.276056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.276203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.276239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.276410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.276477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 
00:37:35.600 [2024-11-19 08:01:27.276665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.276720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.276864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.276901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.277071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.277135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.277343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.277400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 00:37:35.600 [2024-11-19 08:01:27.277557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.600 [2024-11-19 08:01:27.277596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.600 qpair failed and we were unable to recover it. 
00:37:35.600 [2024-11-19 08:01:27.277767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.600 [2024-11-19 08:01:27.277804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.600 qpair failed and we were unable to recover it.
00:37:35.600 [2024-11-19 08:01:27.277935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.600 [2024-11-19 08:01:27.277970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.278091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.278125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.278238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.278273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.278458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.278526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.278683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.278742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.278921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.278973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.279096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.279134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.279335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.279374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.279505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.279558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.279694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.279748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.279883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.279932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.280081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.280143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.280285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.280320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.280429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.280465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.280607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.280643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.280805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.280855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.281068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.281134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.281288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.281339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.281552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.281619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.281787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.281822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.281927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.281983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.282110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.282169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.282286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.282324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.282440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.282477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.282647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.282684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.282869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.282903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.283056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.283094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.283245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.283283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.283398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.283435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.283697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.283752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.283905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.283943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.284125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.284178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.284333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.284385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.284591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.284645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.284836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.284885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.601 [2024-11-19 08:01:27.284999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.601 [2024-11-19 08:01:27.285056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.601 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.285280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.285339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.285484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.285545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.285713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.285767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.285878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.285932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.286111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.286149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.286344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.286381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.286572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.286606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.286715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.286750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.286889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.286922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.287146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.287184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.287324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.287374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.287535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.287573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.287727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.287762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.287894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.287929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.288089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.288126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.288333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.288371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.288521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.288559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.288711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.288762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.288908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.288942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.289103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.289137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.289300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.289338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.289521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.289573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.289736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.289771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.289903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.289936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.290131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.290165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.290276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.290328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.290471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.290508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.290661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.290704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.290863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.290897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.291033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.291067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.291238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.291275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.291434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.291467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.291593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.291643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.291812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.291867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.602 qpair failed and we were unable to recover it.
00:37:35.602 [2024-11-19 08:01:27.292022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.602 [2024-11-19 08:01:27.292075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.292274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.292311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.292450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.292485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.292654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.292695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.292807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.292842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.293040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.293098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.293373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.293432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.293574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.293607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.293725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.293775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.293905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.293954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.294097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.294152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.294322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.294379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.294535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.294599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.294805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.294841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.294951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.294987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.295098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.295133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.295368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.295427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.295584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.295621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.295777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.295812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.295947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.295982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.296189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.296247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.296363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.296400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.296573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.296610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.296797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.296831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.296953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.296987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.297138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.297176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.297379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.297431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.297543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.297593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.297721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.297770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.297952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.603 [2024-11-19 08:01:27.298006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.603 qpair failed and we were unable to recover it.
00:37:35.603 [2024-11-19 08:01:27.298201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.298263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.298376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.298416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.298616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.298651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.298792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.298841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.298998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.299050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.299230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.299297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.299450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.299488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.299665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.299709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.299838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.299872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.300028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.300074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.300239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.300297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.300461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.300495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.300633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.300672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.300856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.604 [2024-11-19 08:01:27.300906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.604 qpair failed and we were unable to recover it.
00:37:35.604 [2024-11-19 08:01:27.301026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.301084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.301230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.301270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.301411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.301450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.301609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.301643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.301794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.301830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 
00:37:35.604 [2024-11-19 08:01:27.301944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.301996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.302113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.302165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.302327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.302386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.302541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.302580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.302761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.302809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 
00:37:35.604 [2024-11-19 08:01:27.302970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.303025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.303268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.303325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.303457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.303491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.303635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.303671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.303845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.303893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 
00:37:35.604 [2024-11-19 08:01:27.304017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.304074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.304187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.304225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.304395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.304452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.304598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.304636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.304814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.304851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 
00:37:35.604 [2024-11-19 08:01:27.304998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.604 [2024-11-19 08:01:27.305049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.604 qpair failed and we were unable to recover it. 00:37:35.604 [2024-11-19 08:01:27.305192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.305227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.305380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.305432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.305536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.305569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.305760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.305808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 
00:37:35.605 [2024-11-19 08:01:27.305923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.305960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.306112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.306146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.306256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.306290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.306421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.306456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.306591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.306625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 
00:37:35.605 [2024-11-19 08:01:27.306779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.306815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.306980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.307037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.307190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.307244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.307377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.307412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.307526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.307562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 
00:37:35.605 [2024-11-19 08:01:27.307717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.307775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.307895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.307931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.308099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.308133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.308263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.308297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.308433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.308467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 
00:37:35.605 [2024-11-19 08:01:27.308615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.308651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.308785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.308821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.308993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.309036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.309175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.309237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.309430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.309491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 
00:37:35.605 [2024-11-19 08:01:27.309610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.309648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.309860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.309909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.310077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.310114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.310320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.310381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.310549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.310587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 
00:37:35.605 [2024-11-19 08:01:27.310755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.310789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.310903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.310939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.311148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.311186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.311418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.311476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.605 [2024-11-19 08:01:27.311656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.311736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 
00:37:35.605 [2024-11-19 08:01:27.311866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.605 [2024-11-19 08:01:27.311903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.605 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.312099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.312165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.312323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.312377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.312514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.312549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.312712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.312756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 
00:37:35.606 [2024-11-19 08:01:27.312935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.312989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.313150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.313191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.313338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.313389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.313507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.313559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.313716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.313765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 
00:37:35.606 [2024-11-19 08:01:27.313912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.313961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.314142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.314180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.314321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.314357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.314500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.314553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.314719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.314756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 
00:37:35.606 [2024-11-19 08:01:27.314872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.314908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.315013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.315047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.315237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.315297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.315427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.315481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.315601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.315638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 
00:37:35.606 [2024-11-19 08:01:27.315768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.315808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.315983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.316055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.316266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.316321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.316529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.316596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.316782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.316819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 
00:37:35.606 [2024-11-19 08:01:27.317053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.317103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.317256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.317295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.317484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.317547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.317683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.317735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 00:37:35.606 [2024-11-19 08:01:27.317863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.606 [2024-11-19 08:01:27.317897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.606 qpair failed and we were unable to recover it. 
00:37:35.606 [2024-11-19 08:01:27.318006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.606 [2024-11-19 08:01:27.318041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.606 qpair failed and we were unable to recover it.
00:37:35.606 [2024-11-19 08:01:27.318204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.606 [2024-11-19 08:01:27.318237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.606 qpair failed and we were unable to recover it.
00:37:35.606 [2024-11-19 08:01:27.318420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.606 [2024-11-19 08:01:27.318481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.606 qpair failed and we were unable to recover it.
00:37:35.606 [2024-11-19 08:01:27.318650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.606 [2024-11-19 08:01:27.318704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.606 qpair failed and we were unable to recover it.
00:37:35.606 [2024-11-19 08:01:27.318868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.606 [2024-11-19 08:01:27.318902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.606 qpair failed and we were unable to recover it.
00:37:35.606 [2024-11-19 08:01:27.319013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.606 [2024-11-19 08:01:27.319064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.606 qpair failed and we were unable to recover it.
00:37:35.606 [2024-11-19 08:01:27.319194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.319241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.319367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.319420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.319603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.319642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.319838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.319873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.320022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.320056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.320168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.320220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.320353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.320391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.320573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.320611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.320827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.320861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.321005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.321065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.321202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.321258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.321421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.321460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.321608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.321647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.321919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.321977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.322101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.322138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.322365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.322425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.322637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.322672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.322794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.322829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.322989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.323043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.323176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.323234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.323440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.323495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.323711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.323758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.323944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.323995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.324153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.324206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.324366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.324426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.324560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.324593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.324756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.324872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.325055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.325098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.325328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.325387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.325525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.325560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.325718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.325755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.325874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.325909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.326148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.326187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.326369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.607 [2024-11-19 08:01:27.326408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.607 qpair failed and we were unable to recover it.
00:37:35.607 [2024-11-19 08:01:27.326548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.326586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.326724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.326767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.326921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.326985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.327135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.327203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.327445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.327480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.327624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.327658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.327839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.327887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.328049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.328098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.328233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.328274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.328454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.328493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.328644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.328679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.328816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.328851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.328984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.329052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.329240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.329301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.329421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.329459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.329575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.329628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.329761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.329809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.329997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.330055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.330277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.330336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.330545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.330602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.330750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.330786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.330911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.330951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.331066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.331105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.331281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.331346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.331522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.331560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.331712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.331771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.331948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.332001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.332219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.332274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.332381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.332416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.332549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.332584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.608 qpair failed and we were unable to recover it.
00:37:35.608 [2024-11-19 08:01:27.332768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.608 [2024-11-19 08:01:27.332827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.332991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.333030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.333211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.333271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.333399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.333438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.333590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.333625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.333840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.333876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.333989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.334048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.334174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.334214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.334423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.334462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.334609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.334648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.334790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.334825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.334977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.335041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.335230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.335287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.335592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.335664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.335817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.335852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.336070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.336108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.336239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.336293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.336450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.336489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.336629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.336684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.336892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.336949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.337141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.337218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.337419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.337455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.337617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.337653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.337795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.337844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.337996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.338037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.338228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.338290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.338429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.338502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.338681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.338726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.338874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.338923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.339181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.339243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.339364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.339405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.339591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.339631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.339808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.339857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.340027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.340063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.340216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.340254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.609 qpair failed and we were unable to recover it.
00:37:35.609 [2024-11-19 08:01:27.340401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.609 [2024-11-19 08:01:27.340440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.340581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.340615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.340723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.340767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.340929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.340969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.341178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.341239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.341385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.341454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.341613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.341655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.341840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.341876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.342038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.342073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.342201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.342274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.342498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.342537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.342698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.342753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.342937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.343005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.343151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.343208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.343358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.343397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.343596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.343631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.343759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.343793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.343899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.343932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.344103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.344142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.344335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.344373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.344520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.344557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.344673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.344721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.344901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.344950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.345185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.345238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.345390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.345427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.345622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.345662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.345812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.345846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.346033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.346072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.346304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.346338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.346515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.346553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.346697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.346732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.346865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.346898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.347042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.347109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.347315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.347353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.347537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.610 [2024-11-19 08:01:27.347593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.610 qpair failed and we were unable to recover it.
00:37:35.610 [2024-11-19 08:01:27.347723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.347762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.347942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.347990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.348152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.348190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.348307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.348342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.348573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.348636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.348758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.348794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.348921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.348975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.349199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.349258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.349472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.349507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.349678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.349725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.349856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.349914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.350070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.350123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.350249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.350301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.350445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.350479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.350613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.350648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.350818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.350861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.351000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.351040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.351212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.351250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.351379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.351418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.351581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.351628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.351779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.351816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.351938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.351974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.352157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.352212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.352342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.352401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.352546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.352581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.352695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.352731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.352888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.352926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.353179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.353220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.353425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.353462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.353608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.353646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.353787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.353822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.354046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.354105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.354337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.354394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.354531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.354566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.611 [2024-11-19 08:01:27.354711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.611 [2024-11-19 08:01:27.354751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.611 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.354881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.354933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.355130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.355198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.355384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.355458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.355592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.355632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.355786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.355824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.355938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.355974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.356112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.356147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.356241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.356275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.356416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.356451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.356686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.356751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.356901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.356939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.357046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.357081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.357190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.357225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.357351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.357406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.357568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.612 [2024-11-19 08:01:27.357603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.612 qpair failed and we were unable to recover it.
00:37:35.612 [2024-11-19 08:01:27.357751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.357793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.357929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.357979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.358153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.358218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.358493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.358553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.358700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.358766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 
00:37:35.612 [2024-11-19 08:01:27.358891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.358939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.359207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.359247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.359471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.359510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.359660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.359709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.359855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.359889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 
00:37:35.612 [2024-11-19 08:01:27.360053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.360087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.360294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.360355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.360490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.360543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.360774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.360808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.360922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.360956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 
00:37:35.612 [2024-11-19 08:01:27.361119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.361157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.361322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.361375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.361523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.361561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.361706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.361761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.612 qpair failed and we were unable to recover it. 00:37:35.612 [2024-11-19 08:01:27.361903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.612 [2024-11-19 08:01:27.361938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 
00:37:35.613 [2024-11-19 08:01:27.362173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.362228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.362419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.362460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.362591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.362632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.362783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.362819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.363009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.363057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 
00:37:35.613 [2024-11-19 08:01:27.363291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.363344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.363476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.363515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.363651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.363686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.363839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.363872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.364027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.364089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 
00:37:35.613 [2024-11-19 08:01:27.364346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.364403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.364553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.364592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.364755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.364791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.364925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.364994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.365234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.365306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 
00:37:35.613 [2024-11-19 08:01:27.365445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.365480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.365641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.365679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.365881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.365915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.366108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.366164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.366396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.366454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 
00:37:35.613 [2024-11-19 08:01:27.366633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.366677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.366821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.366857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.366990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.367024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.367196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.367234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.367420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.367477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 
00:37:35.613 [2024-11-19 08:01:27.367626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.367664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.367851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.367900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.368104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.368164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.613 [2024-11-19 08:01:27.368370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.613 [2024-11-19 08:01:27.368427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.613 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.368572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.368610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 
00:37:35.614 [2024-11-19 08:01:27.368760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.368795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.368907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.368941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.369085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.369119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.369248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.369347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.369514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.369552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 
00:37:35.614 [2024-11-19 08:01:27.369737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.369772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.369935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.369973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.370167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.370221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.370407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.370455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.370578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.370612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 
00:37:35.614 [2024-11-19 08:01:27.370814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.370864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.371063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.371104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.371256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.371295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.371520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.371587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.371760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.371798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 
00:37:35.614 [2024-11-19 08:01:27.371900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.371934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.372108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.372165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.372297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.372350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.372493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.372528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.372644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.372701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 
00:37:35.614 [2024-11-19 08:01:27.372855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.372892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.373054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.373102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.373251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.373286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.373393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.373428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.373569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.373603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 
00:37:35.614 [2024-11-19 08:01:27.373754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.373793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.373912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.373948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.374069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.374106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.374246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.374282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.374415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.374449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 
00:37:35.614 [2024-11-19 08:01:27.374557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.374596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.374712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.374748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.374884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.374934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.614 [2024-11-19 08:01:27.375063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.614 [2024-11-19 08:01:27.375102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.614 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.375245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.375301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 
00:37:35.615 [2024-11-19 08:01:27.375438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.375491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.375636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.375673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.375838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.375879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.376084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.376125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.376281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.376341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 
00:37:35.615 [2024-11-19 08:01:27.376459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.376510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.376622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.376658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.376783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.376820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.377023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.377077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.377249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.377303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 
00:37:35.615 [2024-11-19 08:01:27.377492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.377551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.377668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.377714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.377853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.377887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.378050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.378089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.378240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.378299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 
00:37:35.615 [2024-11-19 08:01:27.378487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.378553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.378670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.378718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.378910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.378945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.379083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.379152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.379336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.379394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 
00:37:35.615 [2024-11-19 08:01:27.379599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.379647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.379786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.379833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.379967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.380017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.380193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.380234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.380360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.380412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 
00:37:35.615 [2024-11-19 08:01:27.380588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.380623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.380757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.380793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.380927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.380960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.381085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.381143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.381267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.381305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 
00:37:35.615 [2024-11-19 08:01:27.381453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.381490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.381613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.381650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.381792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.381827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.615 qpair failed and we were unable to recover it. 00:37:35.615 [2024-11-19 08:01:27.381937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.615 [2024-11-19 08:01:27.381971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.382086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.382122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 
00:37:35.616 [2024-11-19 08:01:27.382263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.382301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.382528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.382562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.382670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.382712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.382852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.382886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.383016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.383053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 
00:37:35.616 [2024-11-19 08:01:27.383169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.383207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.383320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.383358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.383502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.383539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.383697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.383752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.383869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.383903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 
00:37:35.616 [2024-11-19 08:01:27.384107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.384144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.384297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.384361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.384486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.384537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.384665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.384709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.384853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.384890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 
00:37:35.616 [2024-11-19 08:01:27.385011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.385060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.385232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.385299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.385467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.385521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.385753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.385789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.385926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.385960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 
00:37:35.616 [2024-11-19 08:01:27.386064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.386099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.386215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.386249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.386388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.386421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.386557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.386590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.386722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.386756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 
00:37:35.616 [2024-11-19 08:01:27.386881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.386919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.387061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.387100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.387279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.387322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.387450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.387488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.387649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.387685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 
00:37:35.616 [2024-11-19 08:01:27.387828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.387881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.388027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.388080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.616 [2024-11-19 08:01:27.388208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.616 [2024-11-19 08:01:27.388261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.616 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.388404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.388438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.388549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.388584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 
00:37:35.617 [2024-11-19 08:01:27.388685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.388727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.388889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.388938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.389109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.389145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.389270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.389305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.389452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.389487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 
00:37:35.617 [2024-11-19 08:01:27.389623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.389658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.389803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.389841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.389965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.390005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.390155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.390192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.390341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.390395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 
00:37:35.617 [2024-11-19 08:01:27.390505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.390541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.390658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.390698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.390834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.390868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.391018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.391055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.391178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.391216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 
00:37:35.617 [2024-11-19 08:01:27.391353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.391387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.391565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.391601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.391722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.391758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.391908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.391961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.392119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.392210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 
00:37:35.617 [2024-11-19 08:01:27.392391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.392448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.392570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.392606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.392760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.392809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.392955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.393003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 00:37:35.617 [2024-11-19 08:01:27.393167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.617 [2024-11-19 08:01:27.393254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.617 qpair failed and we were unable to recover it. 
00:37:35.617 [2024-11-19 08:01:27.393406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.617 [2024-11-19 08:01:27.393471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.617 qpair failed and we were unable to recover it.
[... the same three-record sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats approximately 114 more times between 08:01:27.393 and 08:01:27.415, cycling through tqpairs 0x61500021ff00, 0x6150001ffe80, 0x6150001f2f00, and 0x615000210000 ...]
00:37:35.621 [2024-11-19 08:01:27.415400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.415438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.415568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.415602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.415713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.415747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.415887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.415922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.416048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.416087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 
00:37:35.621 [2024-11-19 08:01:27.416215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.416253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.416368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.416407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.416526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.416564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.416716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.416769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.416916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.416981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 
00:37:35.621 [2024-11-19 08:01:27.417160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.417201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.417354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.417392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.417520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.417564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.417723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.417757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.417869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.417904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 
00:37:35.621 [2024-11-19 08:01:27.418018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.418054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.418220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.418287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.418452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.418491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.418662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.418706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.418884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.418918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 
00:37:35.621 [2024-11-19 08:01:27.419046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.419084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.419192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.419229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.419372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.419411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.419544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.419582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.419769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.419819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 
00:37:35.621 [2024-11-19 08:01:27.419948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.419986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.621 [2024-11-19 08:01:27.420147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.621 [2024-11-19 08:01:27.420208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.621 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.420377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.420415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.420588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.420625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.420764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.420799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 
00:37:35.622 [2024-11-19 08:01:27.420974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.421008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.421159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.421210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.421333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.421386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.421518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.421556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.421702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.421773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 
00:37:35.622 [2024-11-19 08:01:27.421924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.421977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.422113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.422165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.422336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.422391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.422553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.422587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.422707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.422741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 
00:37:35.622 [2024-11-19 08:01:27.422880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.422915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.423075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.423113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.423307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.423345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.423476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.423514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.423639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.423677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 
00:37:35.622 [2024-11-19 08:01:27.423842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.423877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.423984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.424035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.424152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.424190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.424383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.424420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.424535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.424573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 
00:37:35.622 [2024-11-19 08:01:27.424704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.424758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.424865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.424899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.424999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.425036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.425192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.425229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.425363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.425416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 
00:37:35.622 [2024-11-19 08:01:27.425565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.425604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.425765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.425815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.425993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.426031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.426171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.426206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.426318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.426353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 
00:37:35.622 [2024-11-19 08:01:27.426482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.426531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.622 qpair failed and we were unable to recover it. 00:37:35.622 [2024-11-19 08:01:27.426650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.622 [2024-11-19 08:01:27.426684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.426797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.426833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.427118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.427158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.427295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.427348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 
00:37:35.623 [2024-11-19 08:01:27.427527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.427565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.427704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.427758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.427878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.427927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.428066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.428122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.428235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.428293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 
00:37:35.623 [2024-11-19 08:01:27.428433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.428488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.428596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.428631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.428759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.428795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.428931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.428966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.429105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.429139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 
00:37:35.623 [2024-11-19 08:01:27.429249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.429284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.429430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.429466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.429576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.429611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.429718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.429753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.429873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.429907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 
00:37:35.623 [2024-11-19 08:01:27.430034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.430082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.430224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.430260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.430417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.430470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.430581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.430616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 00:37:35.623 [2024-11-19 08:01:27.430791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.623 [2024-11-19 08:01:27.430846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.623 qpair failed and we were unable to recover it. 
00:37:35.623 [2024-11-19 08:01:27.431002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.623 [2024-11-19 08:01:27.431042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.623 qpair failed and we were unable to recover it.
00:37:35.623 [2024-11-19 08:01:27.431210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.623 [2024-11-19 08:01:27.431269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.623 qpair failed and we were unable to recover it.
00:37:35.623 [2024-11-19 08:01:27.431437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.623 [2024-11-19 08:01:27.431493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.623 qpair failed and we were unable to recover it.
00:37:35.623 [2024-11-19 08:01:27.431609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.623 [2024-11-19 08:01:27.431648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.623 qpair failed and we were unable to recover it.
00:37:35.623 [2024-11-19 08:01:27.431779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.623 [2024-11-19 08:01:27.431815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.623 qpair failed and we were unable to recover it.
00:37:35.623 [2024-11-19 08:01:27.431979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.432017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.432132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.432170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.432283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.432328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.432498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.432536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.432654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.432707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.432867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.432903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.433084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.433139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.433269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.433322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.433472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.433525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.433662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.433704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.433858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.433912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.434039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.434092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.434227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.434260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.434365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.434399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.434547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.434583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.434710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.434780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.434946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.435001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.435163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.435202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.435361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.435395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.435533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.435567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.435706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.435742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.435896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.435949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.436102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.436152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.436280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.436332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.436446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.436494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.436611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.436648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.436790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.436832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.437058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.437096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.437276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.437315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.437471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.437509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.437646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.437680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.437808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.437843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.437942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.437976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.438136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.438190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.624 [2024-11-19 08:01:27.438354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.624 [2024-11-19 08:01:27.438406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.624 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.438522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.438559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.438704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.438740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.438894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.438942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.439098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.439153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.439317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.439355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.439465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.439502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.439651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.439685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.439803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.439842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.439952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.440005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.440175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.440210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.440319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.440355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.440499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.440534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.440674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.440730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.440864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.440913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.441033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.441068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.441225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.441260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.441454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.441488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.441605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.441640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.441788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.441838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.441959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.441997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.442110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.442162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.442286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.442323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.442501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.442567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.442680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.442724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.442890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.442925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.443047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.443081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.443219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.443253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.443378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.443429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.443560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.443593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.443709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.443742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.443896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.443932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.444075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.444110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.444219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.444270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.444380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.444414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.444542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.444590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.444710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.625 [2024-11-19 08:01:27.444748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.625 qpair failed and we were unable to recover it.
00:37:35.625 [2024-11-19 08:01:27.444915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.444963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.445106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.445141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.445269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.445319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.445468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.445505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.445625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.445661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.445802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.445838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.445973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.446010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.446129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.446165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.446270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.446304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.446445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.446480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.446632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.446702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.446840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.446884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.446993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.447029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.447138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.447173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.447308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.447344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.447507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.447542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.447653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.447697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.447810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.447845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.447976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.626 [2024-11-19 08:01:27.448010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.626 qpair failed and we were unable to recover it.
00:37:35.626 [2024-11-19 08:01:27.448145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.448179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.448322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.448358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.448499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.448535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.448710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.448758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.448924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.448974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 
00:37:35.626 [2024-11-19 08:01:27.449120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.449157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.449298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.449333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.449470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.449506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.449618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.449654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.449809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.449845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 
00:37:35.626 [2024-11-19 08:01:27.449952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.449986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.450094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.450128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.450278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.450327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.450479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.450515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.450664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.450708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 
00:37:35.626 [2024-11-19 08:01:27.450818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.450853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.451007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.451055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.451168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.451204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.451323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.451359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.626 qpair failed and we were unable to recover it. 00:37:35.626 [2024-11-19 08:01:27.451475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.626 [2024-11-19 08:01:27.451510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 
00:37:35.627 [2024-11-19 08:01:27.451638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.451673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.451803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.451839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.451973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.452008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.452142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.452178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.452287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.452322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 
00:37:35.627 [2024-11-19 08:01:27.452442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.452478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.452603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.452652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.452802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.452850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.452984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.453020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.453151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.453196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 
00:37:35.627 [2024-11-19 08:01:27.453360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.453394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.453511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.453546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.453684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.453732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.453862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.453898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.454058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.454093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 
00:37:35.627 [2024-11-19 08:01:27.454281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.454335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.454470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.454524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.454679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.454733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.454852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.454888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.455003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.455038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 
00:37:35.627 [2024-11-19 08:01:27.455178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.455214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.455355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.455394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.455549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.455588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.455728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.455764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.455894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.455928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 
00:37:35.627 [2024-11-19 08:01:27.456064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.456098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.456217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.456271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.456406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.456441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.456618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.456666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.456831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.456867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 
00:37:35.627 [2024-11-19 08:01:27.457003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.457038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.457173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.457209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.457321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.457358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.457483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.457519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.457654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.457699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 
00:37:35.627 [2024-11-19 08:01:27.457830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.457865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.457991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.627 [2024-11-19 08:01:27.458040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.627 qpair failed and we were unable to recover it. 00:37:35.627 [2024-11-19 08:01:27.458205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.458245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.458363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.458401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.458568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.458604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 
00:37:35.628 [2024-11-19 08:01:27.458726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.458763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.458868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.458903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.459066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.459120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.459256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.459347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.459478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.459513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 
00:37:35.628 [2024-11-19 08:01:27.459684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.459727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.459872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.459906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.460061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.460110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.460256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.460296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.460509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.460545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 
00:37:35.628 [2024-11-19 08:01:27.460676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.460718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.460864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.460917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.461078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.461136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.461317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.461355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.461496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.461534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 
00:37:35.628 [2024-11-19 08:01:27.461704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.461741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.461869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.461917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.462053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.462092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.462344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.462402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.462543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.462578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 
00:37:35.628 [2024-11-19 08:01:27.462714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.462750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.462902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.462951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.463129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.463170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.463334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.463405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.463543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.463580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 
00:37:35.628 [2024-11-19 08:01:27.463697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.463731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.463848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.463882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.464071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.464125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.464314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.464367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 00:37:35.628 [2024-11-19 08:01:27.464504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.464541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.628 qpair failed and we were unable to recover it. 
00:37:35.628 [2024-11-19 08:01:27.464677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.628 [2024-11-19 08:01:27.464718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.464851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.464900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.465132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.465185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.465347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.465398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.465499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.465533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 
00:37:35.629 [2024-11-19 08:01:27.465702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.465738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.465875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.465910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.466031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.466085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.466318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.466353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.466515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.466553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 
00:37:35.629 [2024-11-19 08:01:27.466667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.466711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.466824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.466858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.466986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.467024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.467238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.467294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.467450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.467495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 
00:37:35.629 [2024-11-19 08:01:27.467603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.467638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.467768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.467805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.468024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.468077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.468237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.468271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.468404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.468439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 
00:37:35.629 [2024-11-19 08:01:27.468544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.468579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.468791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.468826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.468976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.469029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.469186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.469223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.469329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.469364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 
00:37:35.629 [2024-11-19 08:01:27.469502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.469537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.469644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.469677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.469834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.469869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.469993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.470045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.470150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.470184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 
00:37:35.629 [2024-11-19 08:01:27.470295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.470329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.470445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.470481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.470598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.470633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.470774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.470823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.470988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.471023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 
00:37:35.629 [2024-11-19 08:01:27.471146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.471180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.471291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.471325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.471455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.629 [2024-11-19 08:01:27.471489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.629 qpair failed and we were unable to recover it. 00:37:35.629 [2024-11-19 08:01:27.471590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.471624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.471728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.471762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 
00:37:35.630 [2024-11-19 08:01:27.471871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.471906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.472089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.472140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.472251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.472285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.472425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.472464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.472580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.472634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 
00:37:35.630 [2024-11-19 08:01:27.472776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.472812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.472957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.472996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.473155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.473189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.473298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.473340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.473449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.473484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 
00:37:35.630 [2024-11-19 08:01:27.473670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.473715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.473845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.473879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.474018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.474056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.474210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.474243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.474346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.474380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 
00:37:35.630 [2024-11-19 08:01:27.474501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.474553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.474695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.474730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.474836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.474870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.474985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.475020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.475130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.475164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 
00:37:35.630 [2024-11-19 08:01:27.475276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.475310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.475432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.475467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.475611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.475646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.475851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.475899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.476029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.476066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 
00:37:35.630 [2024-11-19 08:01:27.476175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.476210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.476321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.476355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.476460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.476494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.476639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.476673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.476817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.476853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 
00:37:35.630 [2024-11-19 08:01:27.476961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.476995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.477124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.477159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.477272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.477307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.477442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.477497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.477610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.477649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 
00:37:35.630 [2024-11-19 08:01:27.477785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.477820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.478030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.478069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.478184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.478221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.630 [2024-11-19 08:01:27.478406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.630 [2024-11-19 08:01:27.478460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.630 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.478600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.478635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 
00:37:35.631 [2024-11-19 08:01:27.478777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.478812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.478931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.478966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.479077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.479112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.479249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.479283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.479438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.479477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 
00:37:35.631 [2024-11-19 08:01:27.479645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.479680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.479795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.479830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.479935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.479970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.480134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.480169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.480273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.480315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 
00:37:35.631 [2024-11-19 08:01:27.480456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.480491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.480765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.480814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.480970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.481020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.481163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.481202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.481359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.481402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 
00:37:35.631 [2024-11-19 08:01:27.481616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.481656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.481811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.481847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.481957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.481995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.482218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.482255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 00:37:35.631 [2024-11-19 08:01:27.482372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.631 [2024-11-19 08:01:27.482408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.631 qpair failed and we were unable to recover it. 
00:37:35.631 [2024-11-19 08:01:27.482521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.482556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.482704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.482739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.482842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.482880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.483020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.483069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.483184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.483220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.483452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.483513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.483657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.483704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.483863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.483897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.484092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.484158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.484299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.484337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.484454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.484492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.484653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.484698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.484840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.484874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.484974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.485008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.485162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.485197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.485335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.485380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.485542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.485593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.485740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.485788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.485925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.485975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.631 qpair failed and we were unable to recover it.
00:37:35.631 [2024-11-19 08:01:27.486126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.631 [2024-11-19 08:01:27.486161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.632 qpair failed and we were unable to recover it.
00:37:35.632 [2024-11-19 08:01:27.486323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.632 [2024-11-19 08:01:27.486358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.632 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.486494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.486530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.486684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.486747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.486866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.486906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.487071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.487119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.487269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.487323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.487527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.487566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.487697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.487737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.487873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.487907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.488016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.488056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.488228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.488289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.488439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.488477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.488615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.488650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.488802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.488837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.488952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.488995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.489156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.489209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.489315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.489350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.489461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.489497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.489613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.489648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.489775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.489817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.489948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.916 [2024-11-19 08:01:27.489984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.916 qpair failed and we were unable to recover it.
00:37:35.916 [2024-11-19 08:01:27.490193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.490231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.490336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.490372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.490510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.490555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.490706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.490741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.490879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.490914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.491023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.491057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.491210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.491248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.491467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.491506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.491635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.491669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.491841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.491878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.492028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.492081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.492190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.492225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.492379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.492415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.492596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.492631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.492772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.492807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.492919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.492954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.493185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.493222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.493387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.493430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.493568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.493604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.493716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.493751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.493896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.493951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.494137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.494195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.494311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.494346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.494496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.494530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.494627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.494661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.494803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.494840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.494976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.495016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.495190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.495224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.495349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.495389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.495492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.495526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.495667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.495712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.495830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.495866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.496028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.496081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.496218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.496253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.496391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.496425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.496562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.496597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.496837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.917 [2024-11-19 08:01:27.496887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.917 qpair failed and we were unable to recover it.
00:37:35.917 [2024-11-19 08:01:27.497045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.497084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.497243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.497278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.497405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.497440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.497581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.497615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.497742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.497778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.497959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.498013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.498154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.498206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.498363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.498397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.498537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.498571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.498695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.498756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.498897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.498935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.499051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.499086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.499256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.499292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.499446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.499482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.499708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.499744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.499855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.499891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.500075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.500134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.500353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.500405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.500553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.500590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.500705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.500743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.500879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.500915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.501102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.501155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.501270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.501305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.501458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.501493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.501632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.501669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.501836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.501885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.502037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.502086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.502206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.502244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.502447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.502511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.502650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.502685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.502825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.502859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.503010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.503050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.503213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.503248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.503383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.503418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.503580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.503628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.503764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.503802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.503933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.503970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.918 qpair failed and we were unable to recover it.
00:37:35.918 [2024-11-19 08:01:27.504085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.918 [2024-11-19 08:01:27.504121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.504287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.504322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.504425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.504460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.504582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.504622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.504776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.504814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.504937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.504986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.505098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.505154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.505304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.505343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.505545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.505583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.505735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.505785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.505899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.505935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.506072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.506108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.506218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.506253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.506404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.506450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.506632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.506670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.506793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.506829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.506941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.506979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.507094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.507129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.507258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.507292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.507427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.507462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.507582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.507621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.507810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.507846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.507959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.507994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.508118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.508156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.508286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.508339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.508462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.508500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.508612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.508649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.508835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.508884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.509003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.509049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.509161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.509195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.509329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.509363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.509540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.509574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.509679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.509721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.509832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.509867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.510030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.510085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.510207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.510245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.510410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.510445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.919 [2024-11-19 08:01:27.510558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.919 [2024-11-19 08:01:27.510593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.919 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.510711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.510746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.510883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.510918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.511023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.511076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.511215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.511249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.511375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.511414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.511524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.511558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.511661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.511703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.511839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.511873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.512007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.512042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.512148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.512183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.512351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.512404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.512537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.512571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.512675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.512721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.512863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.512898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.513048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.513087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.513222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.513256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.513394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.513427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.513586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.513620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.513784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.513834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.514004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.514071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.514226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.514261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.514437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.514472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.514625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.514660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.514789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.514823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.514941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.514975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.515110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.515145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.515299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.515337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.515499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.515537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.515732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.515782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.515962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.516012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.516191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.516226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.516370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.516404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.516537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.516572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.516674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.516716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.516868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.516916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.517044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.920 [2024-11-19 08:01:27.517084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.920 qpair failed and we were unable to recover it.
00:37:35.920 [2024-11-19 08:01:27.517241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.921 [2024-11-19 08:01:27.517284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.921 qpair failed and we were unable to recover it.
00:37:35.921 [2024-11-19 08:01:27.517399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.517435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.517550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.517590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.517705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.517740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.517872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.517907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.518012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.518047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 
00:37:35.921 [2024-11-19 08:01:27.518154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.518216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.518368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.518407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.518554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.518591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.518709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.518748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.518857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.518893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 
00:37:35.921 [2024-11-19 08:01:27.519048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.519085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.519234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.519270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.519456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.519524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.519662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.519711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.519864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.519899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 
00:37:35.921 [2024-11-19 08:01:27.520003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.520055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.520174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.520213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.520363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.520401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.520547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.520586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.520751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.520786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 
00:37:35.921 [2024-11-19 08:01:27.520947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.520985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.521104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.521141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.521288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.521325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.521511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.521580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.521734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.521772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 
00:37:35.921 [2024-11-19 08:01:27.521905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.521941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.522137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.522176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.522407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.522446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.522604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.522642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 00:37:35.921 [2024-11-19 08:01:27.522795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.921 [2024-11-19 08:01:27.522830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.921 qpair failed and we were unable to recover it. 
00:37:35.921 [2024-11-19 08:01:27.522952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.523001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.523162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.523215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.523357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.523409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.523513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.523549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.523684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.523730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 
00:37:35.922 [2024-11-19 08:01:27.523861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.523915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.524079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.524114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.524255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.524290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.524423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.524471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.524593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.524634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 
00:37:35.922 [2024-11-19 08:01:27.524761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.524797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.524903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.524936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.525078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.525143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.525265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.525304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.525443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.525481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 
00:37:35.922 [2024-11-19 08:01:27.525607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.525642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.525798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.525848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.525967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.526002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.526159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.526197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.526309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.526348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 
00:37:35.922 [2024-11-19 08:01:27.526469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.526507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.526678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.526723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.526863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.526899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.527071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.527129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.527270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.527308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 
00:37:35.922 [2024-11-19 08:01:27.527450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.527488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.527631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.527669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.527808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.527843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.528060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.528096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.528219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.528258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 
00:37:35.922 [2024-11-19 08:01:27.528432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.528486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.528598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.528633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.528811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.528865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.529013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.529051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.529220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.529277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 
00:37:35.922 [2024-11-19 08:01:27.529412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.529450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.529580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.529623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.529774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.922 [2024-11-19 08:01:27.529812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.922 qpair failed and we were unable to recover it. 00:37:35.922 [2024-11-19 08:01:27.529976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.530017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.530271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.530332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 
00:37:35.923 [2024-11-19 08:01:27.530511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.530607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.530755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.530791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.530914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.530969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.531129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.531190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.531345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.531399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 
00:37:35.923 [2024-11-19 08:01:27.531537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.531572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.531785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.531820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.531979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.532013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.532134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.532169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.532307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.532346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 
00:37:35.923 [2024-11-19 08:01:27.532478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.532514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.532639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.532699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.532865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.532918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.533053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.533094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.533246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.533298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 
00:37:35.923 [2024-11-19 08:01:27.533428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.533480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.533664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.533723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.533878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.533919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.534078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.534118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.534275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.534315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 
00:37:35.923 [2024-11-19 08:01:27.534441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.534497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.534726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.534761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.534941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.534996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.535157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.535213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.535340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.535375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 
00:37:35.923 [2024-11-19 08:01:27.535490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.535526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.535679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.535737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.535878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.535916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.536108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.536147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.536271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.536310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 
00:37:35.923 [2024-11-19 08:01:27.536476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.536530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.536734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.536770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.536949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.537003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.537194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.537260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 00:37:35.923 [2024-11-19 08:01:27.537425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.923 [2024-11-19 08:01:27.537487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.923 qpair failed and we were unable to recover it. 
00:37:35.923 [2024-11-19 08:01:27.537612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.537666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.537812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.537861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.537987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.538056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.538200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.538259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.538475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.538510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 
00:37:35.924 [2024-11-19 08:01:27.538649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.538684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.538803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.538838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.538939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.538976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.539116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.539151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.539303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.539338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 
00:37:35.924 [2024-11-19 08:01:27.539504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.539545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.539706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.539765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.539910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.539987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.540153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.540207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.540385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.540448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 
00:37:35.924 [2024-11-19 08:01:27.540592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.540627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.540808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.540857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.541008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.541045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.541178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.541213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.541319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.541353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 
00:37:35.924 [2024-11-19 08:01:27.541457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.541491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.541587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.541621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.541758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.541795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.541901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.541935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.542077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.542112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 
00:37:35.924 [2024-11-19 08:01:27.542287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.542322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.542427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.542462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.542569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.542607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.542738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.542774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.542931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.542981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 
00:37:35.924 [2024-11-19 08:01:27.543124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.543183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.543293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.543328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.543466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.543501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.543609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.543643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.543769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.543804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 
00:37:35.924 [2024-11-19 08:01:27.543917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.543952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.544066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.544100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.924 [2024-11-19 08:01:27.544230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.924 [2024-11-19 08:01:27.544264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.924 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.544396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.544430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.544578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.544627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 
00:37:35.925 [2024-11-19 08:01:27.544771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.544820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.544950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.545004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.545182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.545236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.545367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.545421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.545563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.545597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 
00:37:35.925 [2024-11-19 08:01:27.545736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.545772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.545905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.545945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.546114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.546153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.546292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.546327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.546470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.546504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 
00:37:35.925 [2024-11-19 08:01:27.546649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.546684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.546813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.546847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.546955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.546991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.547129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.547163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.547271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.547310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 
00:37:35.925 [2024-11-19 08:01:27.547454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.547489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.547628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.547666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.547816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.547864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.548026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.548066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.548280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.548340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 
00:37:35.925 [2024-11-19 08:01:27.548464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.548503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.548630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.548670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.548818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.548853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.549035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.549089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.549293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.549351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 
00:37:35.925 [2024-11-19 08:01:27.549457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.549492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.549661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.549724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.549879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.549921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.550126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.550207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 00:37:35.925 [2024-11-19 08:01:27.550460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.925 [2024-11-19 08:01:27.550520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.925 qpair failed and we were unable to recover it. 
00:37:35.925 [2024-11-19 08:01:27.550637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.550675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.550825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.550859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.551047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.551110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.551304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.551365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.551482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.551521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 
00:37:35.926 [2024-11-19 08:01:27.551700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.551737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.551882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.551937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.552118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.552185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.552325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.552365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.552576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.552615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 
00:37:35.926 [2024-11-19 08:01:27.552742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.552778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.552940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.553007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.553218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.553258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.553396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.553431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.553574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.553611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 
00:37:35.926 [2024-11-19 08:01:27.553754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.553789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.553926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.553962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.554094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.554132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.554333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.554371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.554513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.554551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 
00:37:35.926 [2024-11-19 08:01:27.554685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.554726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.554874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.554924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.555077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.555137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.555261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.555298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.555458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.555504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 
00:37:35.926 [2024-11-19 08:01:27.555636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.555672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.555818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.555867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.555984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.556020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.556197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.556257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.556435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.556494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 
00:37:35.926 [2024-11-19 08:01:27.556611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.556648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.556803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.556843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.556997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.557052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.557199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.557235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.557386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.557440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 
00:37:35.926 [2024-11-19 08:01:27.557573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.557609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.557747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.926 [2024-11-19 08:01:27.557782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.926 qpair failed and we were unable to recover it. 00:37:35.926 [2024-11-19 08:01:27.557931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.557966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.558108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.558142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.558245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.558279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 
00:37:35.927 [2024-11-19 08:01:27.558383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.558417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.558550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.558584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.558697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.558733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.558866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.558900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.559006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.559058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 
00:37:35.927 [2024-11-19 08:01:27.559207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.559244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.559375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.559410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.559553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.559588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.559740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.559778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.559944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.559997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 
00:37:35.927 [2024-11-19 08:01:27.560107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.560141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.560276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.560315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.560469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.560504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.560646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.560700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.560823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.560860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 
00:37:35.927 [2024-11-19 08:01:27.561022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.561060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.561234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.561296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.561469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.561530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.561705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.561741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.561877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.561911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 
00:37:35.927 [2024-11-19 08:01:27.562024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.562075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.562265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.562320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.562482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.562522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.562670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.562750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.562923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.562982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 
00:37:35.927 [2024-11-19 08:01:27.563120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.563173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.563321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.563384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.563561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.563599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.563731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.563766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.563899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.563934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 
00:37:35.927 [2024-11-19 08:01:27.564075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.564113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.564317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.564355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.564507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.564545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.564714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.564765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 00:37:35.927 [2024-11-19 08:01:27.564893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.927 [2024-11-19 08:01:27.564931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.927 qpair failed and we were unable to recover it. 
00:37:35.927 [2024-11-19 08:01:27.565070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.565107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.565230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.565269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.565423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.565461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.565628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.565682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.565872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.565922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 
00:37:35.928 [2024-11-19 08:01:27.566057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.566099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.566231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.566274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.566397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.566437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.566588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.566636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.566784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.566820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 
00:37:35.928 [2024-11-19 08:01:27.566934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.566986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.567216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.567275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.567419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.567458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.567591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.567630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.567781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.567816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 
00:37:35.928 [2024-11-19 08:01:27.567936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.567971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.568114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.568150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.568310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.568356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.568489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.568528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.568681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.568746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 
00:37:35.928 [2024-11-19 08:01:27.568883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.568919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.569111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.569161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.569327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.569381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.569536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.569590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 00:37:35.928 [2024-11-19 08:01:27.569723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.928 [2024-11-19 08:01:27.569759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.928 qpair failed and we were unable to recover it. 
00:37:35.928 [2024-11-19 08:01:27.569946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.570000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.570152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.570205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.570438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.570498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.570655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.570696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.570832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.570873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.570989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.571042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.571299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.571357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.571507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.571547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.571672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.571730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.571926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.571985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.572192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.572263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.572410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.572480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.928 qpair failed and we were unable to recover it.
00:37:35.928 [2024-11-19 08:01:27.572604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.928 [2024-11-19 08:01:27.572643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.572840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.572876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.572993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.573030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.573195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.573230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.573362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.573416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.573587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.573621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.573750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.573786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.573919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.573979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.574102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.574157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.574296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.574331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.574470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.574505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.574638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.574673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.574838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.574887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.575055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.575091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.575226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.575261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.575495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.575551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.575705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.575756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.575920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.575985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.576120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.576174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.576317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.576385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.576523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.576558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.576738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.576773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.576933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.576966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.577094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.577132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.577298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.577336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.577473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.577510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.577626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.577666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.577819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.577868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.578061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.578115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.578247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.578289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.578446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.578486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.578668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.578712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.578854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.578889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.579004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.579057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.579203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.579242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.579416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.579474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.579604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.579648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.579796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.929 [2024-11-19 08:01:27.579835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.929 qpair failed and we were unable to recover it.
00:37:35.929 [2024-11-19 08:01:27.580017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.580071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.580208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.580251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.580404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.580444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.580616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.580655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.580823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.580872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.581044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.581104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.581290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.581354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.581473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.581511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.581648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.581683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.581826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.581860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.582016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.582054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.582210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.582262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.582467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.582505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.582659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.582709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.582849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.582897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.583043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.583085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.583278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.583340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.583499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.583537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.583717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.583766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.583894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.583932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.584061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.584116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.584226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.584266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.584369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.584403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.584573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.584607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.584713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.584750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.584859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.584894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.585069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.585117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.585299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.585358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.585504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.585564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.585705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.585758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.585885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.585924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.586074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.586113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.586265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.586304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.586489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.930 [2024-11-19 08:01:27.586559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.930 qpair failed and we were unable to recover it.
00:37:35.930 [2024-11-19 08:01:27.586666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.586712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.586870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.586922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.587043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.587096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.587228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.587280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.587418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.587454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.587559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.587594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.587761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.587809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.587939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.587987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.588116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.588153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.588267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.588303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.588442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.588477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.588587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.588621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.588792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.588847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.589019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.589061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.589200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.589238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.589363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.589401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.589555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.589589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.589704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.589738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.589875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.589910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.590019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.590056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.590216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.590267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.590402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.590442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.590591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.590629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.590807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.590842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.591011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.591049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.591199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.591237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.591368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.591430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.591573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.591616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.591732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.591783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.591912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.591953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.592089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.931 [2024-11-19 08:01:27.592138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.931 qpair failed and we were unable to recover it.
00:37:35.931 [2024-11-19 08:01:27.592276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.931 [2024-11-19 08:01:27.592314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.931 qpair failed and we were unable to recover it. 00:37:35.931 [2024-11-19 08:01:27.592441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.931 [2024-11-19 08:01:27.592479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.931 qpair failed and we were unable to recover it. 00:37:35.931 [2024-11-19 08:01:27.592601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.931 [2024-11-19 08:01:27.592635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.931 qpair failed and we were unable to recover it. 00:37:35.931 [2024-11-19 08:01:27.592787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.931 [2024-11-19 08:01:27.592821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.931 qpair failed and we were unable to recover it. 00:37:35.931 [2024-11-19 08:01:27.592916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.931 [2024-11-19 08:01:27.592951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.931 qpair failed and we were unable to recover it. 
00:37:35.931 [2024-11-19 08:01:27.593144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.931 [2024-11-19 08:01:27.593177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.931 qpair failed and we were unable to recover it. 00:37:35.931 [2024-11-19 08:01:27.593271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.931 [2024-11-19 08:01:27.593305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.931 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.593463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.593501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.593627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.593661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.593802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.593850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 
00:37:35.932 [2024-11-19 08:01:27.594064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.594128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.594323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.594378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.594557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.594596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.594763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.594799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.594933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.594969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 
00:37:35.932 [2024-11-19 08:01:27.595084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.595120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.595232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.595285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.595444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.595483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.595623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.595663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.595822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.595870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 
00:37:35.932 [2024-11-19 08:01:27.595998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.596047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.596224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.596277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.596466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.596527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.596677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.596738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.596846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.596880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 
00:37:35.932 [2024-11-19 08:01:27.597017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.597053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.597191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.597226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.597369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.597407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.597557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.597592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.597757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.597808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 
00:37:35.932 [2024-11-19 08:01:27.597932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.597989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.598125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.598193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.598379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.598437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.598582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.598622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.598768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.598805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 
00:37:35.932 [2024-11-19 08:01:27.598942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.598998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.599184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.599242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.599379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.599435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.599570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.599605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.599745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.599794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 
00:37:35.932 [2024-11-19 08:01:27.599955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.600004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.600147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.600185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.600322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.600357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.600473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.600509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.932 qpair failed and we were unable to recover it. 00:37:35.932 [2024-11-19 08:01:27.600636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.932 [2024-11-19 08:01:27.600677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 
00:37:35.933 [2024-11-19 08:01:27.600845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.600899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.601022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.601076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.601230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.601283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.601432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.601471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.601588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.601622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 
00:37:35.933 [2024-11-19 08:01:27.601752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.601787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.601897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.601933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.602086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.602135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.602258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.602294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.602430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.602464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 
00:37:35.933 [2024-11-19 08:01:27.602576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.602611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.602740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.602775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.602901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.602939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.603096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.603134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.603234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.603272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 
00:37:35.933 [2024-11-19 08:01:27.603431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.603465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.603573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.603607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.603722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.603756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.603898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.603954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.604112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.604166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 
00:37:35.933 [2024-11-19 08:01:27.604326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.604367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.604540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.604579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.604714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.604749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.604914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.604949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.605144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.605200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 
00:37:35.933 [2024-11-19 08:01:27.605409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.605449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.605571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.605610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.605747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.605783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.605907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.605956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.606076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.606114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 
00:37:35.933 [2024-11-19 08:01:27.606285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.606342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.606490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.606530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.606683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.606746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.606871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.606907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.607052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.607105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 
00:37:35.933 [2024-11-19 08:01:27.607387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.607427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.607590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.933 [2024-11-19 08:01:27.607625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.933 qpair failed and we were unable to recover it. 00:37:35.933 [2024-11-19 08:01:27.607760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.607795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.607941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.607975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.608087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.608139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 
00:37:35.934 [2024-11-19 08:01:27.608300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.608359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.608477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.608529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.608703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.608771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.608895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.608932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.609096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.609132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 
00:37:35.934 [2024-11-19 08:01:27.609300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.609357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.609521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.609584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.609737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.609786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.609966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.610001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.610112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.610165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 
00:37:35.934 [2024-11-19 08:01:27.610332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.610392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.610532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.610567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.610720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.610769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.610914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.610949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.611165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.611234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 
00:37:35.934 [2024-11-19 08:01:27.611440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.611498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.611647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.611684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.611850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.611889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.612032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.612094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.612284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.612318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 
00:37:35.934 [2024-11-19 08:01:27.612479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.612530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.612663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.612704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.612836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.612885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.613075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.613144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.613293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.613330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 
00:37:35.934 [2024-11-19 08:01:27.613455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.613490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.613626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.613662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.613784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.613832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.613946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.613981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.614089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.614123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 
00:37:35.934 [2024-11-19 08:01:27.614258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.614291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.934 qpair failed and we were unable to recover it. 00:37:35.934 [2024-11-19 08:01:27.614404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.934 [2024-11-19 08:01:27.614444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.614589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.614624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.614760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.614796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.614951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.614990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 
00:37:35.935 [2024-11-19 08:01:27.615137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.615177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.615327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.615363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.615488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.615528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.615650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.615711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.615853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.615889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 
00:37:35.935 [2024-11-19 08:01:27.616006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.616043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.616195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.616248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.616483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.616523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.616674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.616719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.616860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.616908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 
00:37:35.935 [2024-11-19 08:01:27.617063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.617103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.617244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.617283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.617440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.617497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.617649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.617683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.617808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.617843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 
00:37:35.935 [2024-11-19 08:01:27.618064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.618100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.618286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.618326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.618477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.618515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.618659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.618701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.618935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.618983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 
00:37:35.935 [2024-11-19 08:01:27.619148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.619203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.619314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.619350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.619570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.619605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.619773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.619821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.619938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.619992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 
00:37:35.935 [2024-11-19 08:01:27.620192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.620230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.620374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.620413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.620585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.620622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.620787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.620822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.620956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.620995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 
00:37:35.935 [2024-11-19 08:01:27.621134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.621210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.621389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.621447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.621603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.621644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.935 [2024-11-19 08:01:27.621786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.935 [2024-11-19 08:01:27.621841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.935 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.621992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.622031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 
00:37:35.936 [2024-11-19 08:01:27.622221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.622283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.622500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.622564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.622753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.622802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.623018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.623071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.623250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.623311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 
00:37:35.936 [2024-11-19 08:01:27.623435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.623474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.623595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.623629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.623743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.623780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.623909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.623943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.624074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.624127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 
00:37:35.936 [2024-11-19 08:01:27.624300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.624339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.624498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.624552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.624668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.624713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.624865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.624898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.625053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.625091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 
00:37:35.936 [2024-11-19 08:01:27.625282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.625347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.625489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.625527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.625698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.625767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.625911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.625960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.626115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.626169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 
00:37:35.936 [2024-11-19 08:01:27.626351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.626390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.626554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.626589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.626726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.626761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.626860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.626894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 00:37:35.936 [2024-11-19 08:01:27.627026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.936 [2024-11-19 08:01:27.627060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.936 qpair failed and we were unable to recover it. 
00:37:35.936 [2024-11-19 08:01:27.627172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.627209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.627328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.627365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.627483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.627520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.627700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.627739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.627877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.627913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.628052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.628092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.628288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.628342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.628475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.628530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.628717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.628756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.628895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.628930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.629056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.629094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.936 qpair failed and we were unable to recover it.
00:37:35.936 [2024-11-19 08:01:27.629213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.936 [2024-11-19 08:01:27.629250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.629402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.629460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.629621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.629657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.629826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.629875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.629999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.630036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.630235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.630303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.630443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.630496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.630649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.630697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.630837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.630873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.631008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.631044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.631167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.631205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.631346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.631399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.631548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.631596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.631733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.631771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.631904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.631957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.632111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.632163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.632279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.632315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.632458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.632492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.632658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.632703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.632821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.632855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.632997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.633032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.633174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.633209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.633317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.633353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.633454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.633489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.633630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.633679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.633890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.633945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.634182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.634244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.634466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.634524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.634757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.634794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.634937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.634972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.635105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.635145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.635331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.635370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.635510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.635563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.635730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.635766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.635909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.635944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.636076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.636110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.636306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.636374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.636540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.937 [2024-11-19 08:01:27.636576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.937 qpair failed and we were unable to recover it.
00:37:35.937 [2024-11-19 08:01:27.636711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.636746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.636877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.636912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.637027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.637081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.637253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.637291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.637494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.637532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.637702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.637769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.637913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.637962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.638131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.638228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.638386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.638422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.638589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.638628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.638762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.638797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.638952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.638990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.639184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.639238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.639352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.639394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.639556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.639596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.639723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.639760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.639943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.640010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.640219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.640280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.640468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.640529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.640684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.640749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.640883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.640918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.641065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.641117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.641248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.641283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.641451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.641490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.641655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.641739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.641897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.641946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.642118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.642162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.642276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.642333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.642467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.642520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.642654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.642699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.642810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.642846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.642977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.643035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.643189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.643244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.643431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.643484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.643607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.643643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.643785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.643820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.643953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.643989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.938 [2024-11-19 08:01:27.644229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.938 [2024-11-19 08:01:27.644267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.938 qpair failed and we were unable to recover it.
00:37:35.939 [2024-11-19 08:01:27.644507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.939 [2024-11-19 08:01:27.644547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.939 qpair failed and we were unable to recover it.
00:37:35.939 [2024-11-19 08:01:27.644701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.939 [2024-11-19 08:01:27.644754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.939 qpair failed and we were unable to recover it.
00:37:35.939 [2024-11-19 08:01:27.644862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.939 [2024-11-19 08:01:27.644900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.939 qpair failed and we were unable to recover it.
00:37:35.939 [2024-11-19 08:01:27.645051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.939 [2024-11-19 08:01:27.645104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.939 qpair failed and we were unable to recover it.
00:37:35.939 [2024-11-19 08:01:27.645267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.939 [2024-11-19 08:01:27.645307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.939 qpair failed and we were unable to recover it.
00:37:35.939 [2024-11-19 08:01:27.645472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.939 [2024-11-19 08:01:27.645511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.939 qpair failed and we were unable to recover it.
00:37:35.939 [2024-11-19 08:01:27.645663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.939 [2024-11-19 08:01:27.645704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.939 qpair failed and we were unable to recover it.
00:37:35.939 [2024-11-19 08:01:27.645837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.939 [2024-11-19 08:01:27.645871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.939 qpair failed and we were unable to recover it.
00:37:35.939 [2024-11-19 08:01:27.646028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.646068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.646250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.646295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.646456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.646496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.646749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.646784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.646892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.646945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 
00:37:35.939 [2024-11-19 08:01:27.647090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.647129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.647296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.647336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.647508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.647559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.647719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.647754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.647880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.647928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 
00:37:35.939 [2024-11-19 08:01:27.648102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.648156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.648260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.648295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.648426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.648461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.648569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.648605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.648779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.648827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 
00:37:35.939 [2024-11-19 08:01:27.648951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.648988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.649097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.649133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.649260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.649294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.649424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.649465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.649625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.649669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 
00:37:35.939 [2024-11-19 08:01:27.649836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.649892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.650099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.650160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.650310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.650375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.650510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.650545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 00:37:35.939 [2024-11-19 08:01:27.650720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-11-19 08:01:27.650757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.939 qpair failed and we were unable to recover it. 
00:37:35.940 [2024-11-19 08:01:27.650883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.650923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.651088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.651140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.651305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.651367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.651477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.651512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.651649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.651683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 
00:37:35.940 [2024-11-19 08:01:27.651840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.651893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.652093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.652162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.652303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.652360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.652581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.652641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.652824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.652865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 
00:37:35.940 [2024-11-19 08:01:27.653056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.653118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.653355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.653416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.653578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.653617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.653786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.653822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.653993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.654046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 
00:37:35.940 [2024-11-19 08:01:27.654265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.654320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.654456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.654514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.654628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.654664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.654825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.654861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.654985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.655023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 
00:37:35.940 [2024-11-19 08:01:27.655216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.655254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.655467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.655523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.655633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.655670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.655810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.655844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.655996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.656034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 
00:37:35.940 [2024-11-19 08:01:27.656193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.656254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.656423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.656461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.656607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.656649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.656794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.656829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.656936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.656971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 
00:37:35.940 [2024-11-19 08:01:27.657190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.657248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.657475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.657535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.657682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.657738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.657899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.657948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.658120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.658160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 
00:37:35.940 [2024-11-19 08:01:27.658298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.658360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.940 qpair failed and we were unable to recover it. 00:37:35.940 [2024-11-19 08:01:27.658536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-11-19 08:01:27.658574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.658741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.658776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.658928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.658977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.659178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.659245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 
00:37:35.941 [2024-11-19 08:01:27.659419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.659475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.659619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.659654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.659775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.659812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.659965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.660020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.660209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.660265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 
00:37:35.941 [2024-11-19 08:01:27.660399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.660435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.660580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.660616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.660728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.660763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.660914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.660953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.661120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.661161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 
00:37:35.941 [2024-11-19 08:01:27.661404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.661460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.661588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.661628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.661818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.661853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.662047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.662101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.662270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.662310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 
00:37:35.941 [2024-11-19 08:01:27.662563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.662624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.662796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.662836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.662938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.662990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.663197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.663302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.663524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.663563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 
00:37:35.941 [2024-11-19 08:01:27.663735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.663773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.663904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.663953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.664126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.664183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.664306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.664364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.664564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.664600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 
00:37:35.941 [2024-11-19 08:01:27.664752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.664787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.664899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.664934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.665074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.665109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.665243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.665277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.665413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.665447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 
00:37:35.941 [2024-11-19 08:01:27.665560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.665597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.665761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.665810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.665982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.666036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.666218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.941 [2024-11-19 08:01:27.666271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.941 qpair failed and we were unable to recover it. 00:37:35.941 [2024-11-19 08:01:27.666491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.666578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 
00:37:35.942 [2024-11-19 08:01:27.666756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.666811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.666967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.667019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.667186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.667255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.667447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.667505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.667655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.667702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 
00:37:35.942 [2024-11-19 08:01:27.667856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.667891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.668003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.668039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.668163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.668201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.668443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.668496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.668618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.668667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 
00:37:35.942 [2024-11-19 08:01:27.668809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.668857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.669011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.669069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.669245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.669283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.669425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.669484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.669613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.669658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 
00:37:35.942 [2024-11-19 08:01:27.669806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.669841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.669977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.670028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.670189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.670242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.670355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.670392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.670532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.670587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 
00:37:35.942 [2024-11-19 08:01:27.670787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.670837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.671008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.671083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.671203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.671240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.671372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.671427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.671560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.671595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 
00:37:35.942 [2024-11-19 08:01:27.671710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.671746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.671859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.671894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.671996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.672030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.672172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.672207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.672367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.672401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 
00:37:35.942 [2024-11-19 08:01:27.672507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.672545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.672705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.672742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.672867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.672906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.673151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.673204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.942 [2024-11-19 08:01:27.673385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.673441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 
00:37:35.942 [2024-11-19 08:01:27.673579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.942 [2024-11-19 08:01:27.673615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.942 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.673761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.673814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.673949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.673989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.674170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.674234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.674375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.674414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 
00:37:35.943 [2024-11-19 08:01:27.674575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.674619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.674733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.674767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.674890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.674929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.675087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.675125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.675272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.675310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 
00:37:35.943 [2024-11-19 08:01:27.675486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.675539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.675676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.675718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.675869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.675938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.676139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.676207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.676342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.676378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 
00:37:35.943 [2024-11-19 08:01:27.676568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.676608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.676749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.676785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.676903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.676951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.677174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.677215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.677373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.677413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 
00:37:35.943 [2024-11-19 08:01:27.677567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.677605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.677772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.677807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.677942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.677995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.678150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.678187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.678385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.678423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 
00:37:35.943 [2024-11-19 08:01:27.678549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.678587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.678738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.678775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.678894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.678930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.679074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.679109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.679273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.679308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 
00:37:35.943 [2024-11-19 08:01:27.679488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.679564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.679744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.679782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.679927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.679963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.680099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.680133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.680259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.680294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 
00:37:35.943 [2024-11-19 08:01:27.680432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.680466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.680600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.680649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.943 [2024-11-19 08:01:27.680795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.943 [2024-11-19 08:01:27.680844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.943 qpair failed and we were unable to recover it. 00:37:35.944 [2024-11-19 08:01:27.680977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.944 [2024-11-19 08:01:27.681033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.944 qpair failed and we were unable to recover it. 00:37:35.944 [2024-11-19 08:01:27.681154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.944 [2024-11-19 08:01:27.681189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.944 qpair failed and we were unable to recover it. 
00:37:35.944 [2024-11-19 08:01:27.681352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.944 [2024-11-19 08:01:27.681405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.944 qpair failed and we were unable to recover it. 00:37:35.944 [2024-11-19 08:01:27.681514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.944 [2024-11-19 08:01:27.681548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.944 qpair failed and we were unable to recover it. 00:37:35.944 [2024-11-19 08:01:27.681656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.944 [2024-11-19 08:01:27.681698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.944 qpair failed and we were unable to recover it. 00:37:35.944 [2024-11-19 08:01:27.681810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.944 [2024-11-19 08:01:27.681844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.944 qpair failed and we were unable to recover it. 00:37:35.944 [2024-11-19 08:01:27.681969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.944 [2024-11-19 08:01:27.682003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.944 qpair failed and we were unable to recover it. 
00:37:35.944 [2024-11-19 08:01:27.682118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.682152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.682295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.682335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.682476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.682513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.682652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.682687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.682860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.682914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.683020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.683055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.683155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.683189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.683316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.683356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.683476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.683520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.683750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.683792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.683940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.683979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.684096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.684136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.684293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.684332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.684507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.684562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.684665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.684708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.684838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.684877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.685044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.685100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.685274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.685330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.685472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.685507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.685645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.685682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.685832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.685868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.686116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.686176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.686354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.686417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.686562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.686602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.686783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.686819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.686945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.687009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.944 [2024-11-19 08:01:27.687161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.944 [2024-11-19 08:01:27.687216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.944 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.687335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.687374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.687489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.687523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.687661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.687704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.687816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.687851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.687964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.688001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.688120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.688168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.688319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.688356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.688493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.688529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.688672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.688718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.688827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.688862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.689073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.689108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.689270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.689320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.689457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.689492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.689592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.689627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.689773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.689810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.689943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.689977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.690090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.690125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.690258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.690296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.690512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.690566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.690711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.690765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.690947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.691000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.691162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.691220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.691437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.691493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.691602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.691638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.691790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.691830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.691960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.692014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.692201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.692261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.692435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.692493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.692604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.692643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.692804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.692839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.692990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.693043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.693190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.693288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.693546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.693607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.693791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.693845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.694009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.694062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.694218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.694272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.694407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.945 [2024-11-19 08:01:27.694441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.945 qpair failed and we were unable to recover it.
00:37:35.945 [2024-11-19 08:01:27.694572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.694621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.694799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.694841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.694982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.695036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.695233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.695293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.695488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.695523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.695658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.695699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.695802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.695837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.695997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.696051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.696283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.696343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.696475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.696510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.696645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.696681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.696845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.696884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.697028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.697081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.697265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.697327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.697482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.697521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.697650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.697695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.697857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.697891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.698055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.698110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.698391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.698451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.698631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.698665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.698813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.698847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.698964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.698998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.699132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.699185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.699407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.699447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.699573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.699613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.699770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.699819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.699970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.700008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.700205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.700275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.700423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.700483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.700624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.700661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.700788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.700825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.700965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.701000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.701160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.701199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.701353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.701392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.701595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.701634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.701814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.701850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.701985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.702020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.702179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.946 [2024-11-19 08:01:27.702213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.946 qpair failed and we were unable to recover it.
00:37:35.946 [2024-11-19 08:01:27.702363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.702402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.702530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.702570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.702767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.702802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.702916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.702951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.703079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.703114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.703292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.703346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.703552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.703592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.703725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.703780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.703912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.703947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.704107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.704148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.704319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.704359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.704508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.704547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.704725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.947 [2024-11-19 08:01:27.704774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.947 qpair failed and we were unable to recover it.
00:37:35.947 [2024-11-19 08:01:27.704900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.704938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.705122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.705175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.705339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.705393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.705559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.705594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.705751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.705807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 
00:37:35.947 [2024-11-19 08:01:27.705963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.706003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.706153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.706220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.706364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.706422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.706589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.706624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.706762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.706811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 
00:37:35.947 [2024-11-19 08:01:27.706974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.707027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.707201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.707263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.707377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.707429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.707640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.707680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.707812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.707862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 
00:37:35.947 [2024-11-19 08:01:27.707976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.708012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.708254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.708320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.708508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.708562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.708702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.708737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.708842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.708896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 
00:37:35.947 [2024-11-19 08:01:27.709046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.709104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.709272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.709329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.709519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.709579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.709729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.709778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 00:37:35.947 [2024-11-19 08:01:27.709932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.947 [2024-11-19 08:01:27.709987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.947 qpair failed and we were unable to recover it. 
00:37:35.948 [2024-11-19 08:01:27.710223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.710285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.710441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.710480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.710661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.710703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.710843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.710879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.711006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.711044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 
00:37:35.948 [2024-11-19 08:01:27.711207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.711244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.711406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.711460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.711606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.711644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.711804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.711837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.711947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.711982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 
00:37:35.948 [2024-11-19 08:01:27.712128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.712162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.712333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.712368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.712506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.712546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.712724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.712775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.712884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.712918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 
00:37:35.948 [2024-11-19 08:01:27.713025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.713059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.713330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.713395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.713555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.713595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.713759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.713796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.713928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.713964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 
00:37:35.948 [2024-11-19 08:01:27.714155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.714227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.714388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.714428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.714577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.714617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.714781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.714830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.715018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.715068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 
00:37:35.948 [2024-11-19 08:01:27.715296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.715362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.715513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.715554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.715715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.715751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.715859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.715899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.716022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.716063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 
00:37:35.948 [2024-11-19 08:01:27.716264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.716305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.716433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.716474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.716636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.716677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.948 [2024-11-19 08:01:27.716847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.948 [2024-11-19 08:01:27.716882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.948 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.717053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.717089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 
00:37:35.949 [2024-11-19 08:01:27.717234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.717311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.717460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.717499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.717666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.717733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.717896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.717935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.718161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.718201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 
00:37:35.949 [2024-11-19 08:01:27.718366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.718405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.718562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.718603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.718766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.718815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.718962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.719000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.719132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.719168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 
00:37:35.949 [2024-11-19 08:01:27.719305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.719341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.719541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.719581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.719771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.719821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.719984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.720026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 00:37:35.949 [2024-11-19 08:01:27.720231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.949 [2024-11-19 08:01:27.720271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.949 qpair failed and we were unable to recover it. 
00:37:35.949 [2024-11-19 08:01:27.720422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.720461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.720603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.720656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.720769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.720838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.720983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.721020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.721158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.721195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.721439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.721480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.721605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.721645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.721786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.721822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.721967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.722002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.722132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.722173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.722364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.722434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.722662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.722709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.722855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.722892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.723012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.723052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.723186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.723241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.723496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.723563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.723720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.723773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.723957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.724007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.724182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.724247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.724501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.724559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.724776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.949 [2024-11-19 08:01:27.724812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.949 qpair failed and we were unable to recover it.
00:37:35.949 [2024-11-19 08:01:27.724926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.724962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.725080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.725117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.725328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.725364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.725465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.725501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.725644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.725705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.725822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.725858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.726015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.726066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.726278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.726342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.726543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.726583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.726742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.726779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.726906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.726966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.727126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.727183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.727308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.727347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.727579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.727616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.727832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.727868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.728003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.728039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.728259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.728320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.728454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.728501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.728646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.728683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.728837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.728877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.728987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.729026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.729173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.729213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.729367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.729407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.729545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.729601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.729801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.729839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.729968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.730034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.730187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.730241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.730393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.730452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.730585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.730621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.730734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.730772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.730900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.730951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.731097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.731134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.731249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.731284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.731418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.731454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.731561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.731596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.731732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.731770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.731894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.731950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.732101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.950 [2024-11-19 08:01:27.732165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.950 qpair failed and we were unable to recover it.
00:37:35.950 [2024-11-19 08:01:27.732352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.732406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.732545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.732582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.732737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.732788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.732910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.732947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.733053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.733089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.733238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.733278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.733421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.733489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.733668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.733717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.733836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.733881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.734039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.734083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.734307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.734373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.734524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.734565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.734732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.734771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.734903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.734953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.735098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.735135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.735277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.735313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.735416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.735451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.735591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.735627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.735785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.735822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.735927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.735961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.736093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.736128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.736300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.736352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.736538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.736576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.736685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.736747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.736888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.736923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.737039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.737078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.737223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.737279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.737429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.737473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.737634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.737684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.737827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.737863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.737996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.738036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.738189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.738227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.738377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.738416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.738546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.738585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.738718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.738781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.738962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.739002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.739161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.951 [2024-11-19 08:01:27.739216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.951 qpair failed and we were unable to recover it.
00:37:35.951 [2024-11-19 08:01:27.739375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.739429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.739543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.739578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.739746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.739803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.739948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.739998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.740129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.740166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.740299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.740336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.740483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.740520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.740651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.740708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.740894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.740931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.741074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.741125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.741296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.741333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.741461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.741497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.741601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.741638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.741779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.741816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.741944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.742011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.742265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.742333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.742567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.742603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.742707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.742752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.742863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.952 [2024-11-19 08:01:27.742897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.952 qpair failed and we were unable to recover it.
00:37:35.952 [2024-11-19 08:01:27.743026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.743097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.743373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.743416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.743565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.743601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.743743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.743779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.743932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.743975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 
00:37:35.952 [2024-11-19 08:01:27.744117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.744178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.744292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.744333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.744491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.744536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.744657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.744700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.744851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.744887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 
00:37:35.952 [2024-11-19 08:01:27.745031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.745072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.745229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.745270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.745425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.745466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.745592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.745632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.745779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.745815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 
00:37:35.952 [2024-11-19 08:01:27.745960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.745996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.746125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.746161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.952 [2024-11-19 08:01:27.746367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.952 [2024-11-19 08:01:27.746436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.952 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.746570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.746620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.746755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.746793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 
00:37:35.953 [2024-11-19 08:01:27.746913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.746948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.747094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.747134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.747296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.747334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.747451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.747498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.747712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.747765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 
00:37:35.953 [2024-11-19 08:01:27.747908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.747966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.748188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.748249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.748475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.748535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.748698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.748745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.748850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.748886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 
00:37:35.953 [2024-11-19 08:01:27.749040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.749079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.749206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.749259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.749387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.749427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.749554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.749609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.749891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.749930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 
00:37:35.953 [2024-11-19 08:01:27.750077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.750125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.750281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.750320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.750456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.750497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.750628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.750682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.750831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.750880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 
00:37:35.953 [2024-11-19 08:01:27.751091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.751151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.751264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.751303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.751486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.751544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.751718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.751777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.751931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.751979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 
00:37:35.953 [2024-11-19 08:01:27.752089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.752126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.752291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.752327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.752438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.752474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.752601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.752650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.752824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.752874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 
00:37:35.953 [2024-11-19 08:01:27.753005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.953 [2024-11-19 08:01:27.753045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.953 qpair failed and we were unable to recover it. 00:37:35.953 [2024-11-19 08:01:27.753192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.753248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.753384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.753442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.753552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.753588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.753714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.753754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 
00:37:35.954 [2024-11-19 08:01:27.753868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.753903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.754076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.754112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.754255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.754290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.754470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.754520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.754665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.754710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 
00:37:35.954 [2024-11-19 08:01:27.754846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.754902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.755023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.755063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.755224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.755303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.755494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.755563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.755708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.755754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 
00:37:35.954 [2024-11-19 08:01:27.755867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.755903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.756076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.756131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.756417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.756480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.756625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.756661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.756817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.756852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 
00:37:35.954 [2024-11-19 08:01:27.756997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.757034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.757158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.757213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.757334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.757374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.757497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.757535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 00:37:35.954 [2024-11-19 08:01:27.757681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.954 [2024-11-19 08:01:27.757749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.954 qpair failed and we were unable to recover it. 
00:37:35.954 [2024-11-19 08:01:27.757864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.954 [2024-11-19 08:01:27.757898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.954 qpair failed and we were unable to recover it.
[... the same three-record failure sequence (connect() failed, errno = 111 → sock connection error → "qpair failed and we were unable to recover it.") repeats continuously through [2024-11-19 08:01:27.778856] for tqpairs 0x6150001f2f00, 0x615000210000, 0x6150001ffe80, and 0x61500021ff00, all targeting addr=10.0.0.2, port=4420 ...]
00:37:35.957 [2024-11-19 08:01:27.779031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.957 [2024-11-19 08:01:27.779074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.957 qpair failed and we were unable to recover it. 00:37:35.957 [2024-11-19 08:01:27.779188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.957 [2024-11-19 08:01:27.779228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.957 qpair failed and we were unable to recover it. 00:37:35.957 [2024-11-19 08:01:27.779406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.957 [2024-11-19 08:01:27.779505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.957 qpair failed and we were unable to recover it. 00:37:35.957 [2024-11-19 08:01:27.779623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.957 [2024-11-19 08:01:27.779662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.957 qpair failed and we were unable to recover it. 00:37:35.957 [2024-11-19 08:01:27.779819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.957 [2024-11-19 08:01:27.779868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.957 qpair failed and we were unable to recover it. 
00:37:35.957 [2024-11-19 08:01:27.780190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.957 [2024-11-19 08:01:27.780250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.957 qpair failed and we were unable to recover it.
00:37:35.957 [2024-11-19 08:01:27.780449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.957 [2024-11-19 08:01:27.780511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.957 qpair failed and we were unable to recover it.
00:37:35.957 [2024-11-19 08:01:27.780699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.957 [2024-11-19 08:01:27.780753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.957 qpair failed and we were unable to recover it.
00:37:35.957 [2024-11-19 08:01:27.780886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.957 [2024-11-19 08:01:27.780922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.957 qpair failed and we were unable to recover it.
00:37:35.957 [2024-11-19 08:01:27.781084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.781158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.781424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.781476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.781661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.781703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.781814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.781851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.781956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.781991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.782146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.782184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.782337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.782397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.782625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.782663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.782846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.782895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.783103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.783143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.783305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.783344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.783449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.783496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.783632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.783668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.783816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.783851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.784010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.784054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.784276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.784337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.784498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.784604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.784763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.784815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.784915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.784950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.785158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.785224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.785331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.785369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.785532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.785601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.785751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.785802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.785919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.785975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.786096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.786136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.786381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.786443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.786615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.786654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.786791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.786827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.786982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.787043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.787151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.787188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.787432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.787495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.787629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.787664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.787865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.787920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.788127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.788187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.788402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.788441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.788580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.788615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.958 qpair failed and we were unable to recover it.
00:37:35.958 [2024-11-19 08:01:27.788760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.958 [2024-11-19 08:01:27.788797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.788927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.788962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.789127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.789166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.789368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.789406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.789577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.789615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.789803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.789839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.789995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.790033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.790210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.790249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.790437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.790476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.790724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.790798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.790961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.791016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.791234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.791289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.791507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.791543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.791710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.791746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.791878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.791934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.792073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.792114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.792243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.792279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.792387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.792422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.792530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.792566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.792728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.792763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.793004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.793061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.793236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.793279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.793440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.793481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.793631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.793671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.793846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.793884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.794099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.794154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.794316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.794356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.794471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.794523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.794684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.794725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.794861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.794896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.795009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.795067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.795182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.795223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.795491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.795550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.795683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.795725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.795842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.795878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.796012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.796047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.959 [2024-11-19 08:01:27.796161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.959 [2024-11-19 08:01:27.796197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:35.959 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.796360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.796416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.796585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.796620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.796799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.796835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.796990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.797050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.797182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.797217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.797357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.797392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.797499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.797535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.797683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.797741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.797858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.797896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.798064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.798100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.798273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.798310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.798428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.798463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.798597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.960 [2024-11-19 08:01:27.798632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:35.960 qpair failed and we were unable to recover it.
00:37:35.960 [2024-11-19 08:01:27.798783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.798819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.798977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.799028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.799174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.799228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.799380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.799442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.799623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.799672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 
00:37:35.960 [2024-11-19 08:01:27.799820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.799876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.800056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.800092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.800298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.800368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.800567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.800627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.800789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.800825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 
00:37:35.960 [2024-11-19 08:01:27.800985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.801023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.801162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.801200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.801325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.801363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.801479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.801518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.801669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.801716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 
00:37:35.960 [2024-11-19 08:01:27.801848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.801884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.802011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.802050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.802228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.802266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.802426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.802481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.802654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.802699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 
00:37:35.960 [2024-11-19 08:01:27.802836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.802871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.803062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.803117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.803332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.803392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.803551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.803587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.803729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.803766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 
00:37:35.960 [2024-11-19 08:01:27.803881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.803916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.804089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.804143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.960 qpair failed and we were unable to recover it. 00:37:35.960 [2024-11-19 08:01:27.804264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.960 [2024-11-19 08:01:27.804317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.804422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.804460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.804606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.804644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 
00:37:35.961 [2024-11-19 08:01:27.804788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.804824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.804980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.805030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.805215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.805253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.805397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.805437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.805618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.805659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 
00:37:35.961 [2024-11-19 08:01:27.805841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.805890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.806094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.806160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.806441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.806501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.806652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.806698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.806853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.806888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 
00:37:35.961 [2024-11-19 08:01:27.807017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.807056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.807213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.807269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.807436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.807495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.807648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.807683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.807817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.807851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 
00:37:35.961 [2024-11-19 08:01:27.808003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.808059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.808258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.808315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.808469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.808523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.808663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.808711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.808849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.808885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 
00:37:35.961 [2024-11-19 08:01:27.809039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.809095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.809269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.809305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.809440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.809475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.809606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.809641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.809788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.809823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 
00:37:35.961 [2024-11-19 08:01:27.809958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.809993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.810132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.810167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.810307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.810343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.810477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.810513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.810641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.810698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 
00:37:35.961 [2024-11-19 08:01:27.810862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.810904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.811036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.811077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.811256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.811296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.811449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.811489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.811662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.811707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 
00:37:35.961 [2024-11-19 08:01:27.811879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.811914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.812073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.812113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.812263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.961 [2024-11-19 08:01:27.812313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.961 qpair failed and we were unable to recover it. 00:37:35.961 [2024-11-19 08:01:27.812551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.962 [2024-11-19 08:01:27.812590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.962 qpair failed and we were unable to recover it. 00:37:35.962 [2024-11-19 08:01:27.812769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.962 [2024-11-19 08:01:27.812819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.962 qpair failed and we were unable to recover it. 
00:37:35.962 [2024-11-19 08:01:27.812971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.962 [2024-11-19 08:01:27.813012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.962 qpair failed and we were unable to recover it. 00:37:35.962 [2024-11-19 08:01:27.813215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.962 [2024-11-19 08:01:27.813257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:35.962 qpair failed and we were unable to recover it. 00:37:35.962 [2024-11-19 08:01:27.813395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.962 [2024-11-19 08:01:27.813436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.962 qpair failed and we were unable to recover it. 00:37:35.962 [2024-11-19 08:01:27.813559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.962 [2024-11-19 08:01:27.813597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:35.962 qpair failed and we were unable to recover it. 00:37:35.962 [2024-11-19 08:01:27.813796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.962 [2024-11-19 08:01:27.813847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:35.962 qpair failed and we were unable to recover it. 
00:37:35.962 [2024-11-19 08:01:27.814016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:35.962 [2024-11-19 08:01:27.814071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:35.962 qpair failed and we were unable to recover it.
00:37:35.962 [... the same connect() failed / sock connection error / qpair failed triplet repeats from [2024-11-19 08:01:27.814258] through [2024-11-19 08:01:27.835241], for tqpair values 0x615000210000, 0x6150001ffe80, 0x6150001f2f00, and 0x61500021ff00, all against addr=10.0.0.2, port=4420 ...]
00:37:36.260 [2024-11-19 08:01:27.835377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.835412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.835550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.835595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.835716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.835753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.835896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.835942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.836078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.836113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 
00:37:36.260 [2024-11-19 08:01:27.836269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.836307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.836454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.836492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.836665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.836732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.836901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.836937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.837052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.837088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 
00:37:36.260 [2024-11-19 08:01:27.837201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.837236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.837395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.837430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.837564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.837600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.837732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.837768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.837935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.837971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 
00:37:36.260 [2024-11-19 08:01:27.838113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.838152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.838315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.260 [2024-11-19 08:01:27.838350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.260 qpair failed and we were unable to recover it. 00:37:36.260 [2024-11-19 08:01:27.838488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.838524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.838663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.838703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.838805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.838840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 
00:37:36.261 [2024-11-19 08:01:27.838999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.839038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.839199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.839238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.839379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.839418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.839639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.839678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.839848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.839883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 
00:37:36.261 [2024-11-19 08:01:27.840097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.840164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.840374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.840415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.840592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.840632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.840773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.840809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.840952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.840988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 
00:37:36.261 [2024-11-19 08:01:27.841101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.841156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.841378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.841435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.841581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.841616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.841787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.841838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.841982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.842036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 
00:37:36.261 [2024-11-19 08:01:27.842288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.842327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.842472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.842511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.842695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.842734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.842862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.842897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.843019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.843072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 
00:37:36.261 [2024-11-19 08:01:27.843231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.843270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.843457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.843525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.843749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.843787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.843932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.843968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.261 [2024-11-19 08:01:27.844095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.844131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 
00:37:36.261 [2024-11-19 08:01:27.844341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.261 [2024-11-19 08:01:27.844403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.261 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.844553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.844592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.844817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.844853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.844990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.845024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.845236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.845275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 
00:37:36.262 [2024-11-19 08:01:27.845494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.845546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.845679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.845722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.845879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.845914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.846125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.846159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.846417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.846477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 
00:37:36.262 [2024-11-19 08:01:27.846648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.846700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.846906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.846955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.847111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.847168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.847452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.847492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.847616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.847655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 
00:37:36.262 [2024-11-19 08:01:27.847791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.847827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.847935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.847971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.848121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.848159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.848369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.848408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.848565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.848605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 
00:37:36.262 [2024-11-19 08:01:27.848766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.848802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.848973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.849023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.849230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.849286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.849487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.849541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.849708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.849745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 
00:37:36.262 [2024-11-19 08:01:27.849882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.849918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.850025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.850061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.850227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.850263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.850496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.850533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 00:37:36.262 [2024-11-19 08:01:27.850642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.262 [2024-11-19 08:01:27.850699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.262 qpair failed and we were unable to recover it. 
00:37:36.262 [2024-11-19 08:01:27.850842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.262 [2024-11-19 08:01:27.850880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.262 qpair failed and we were unable to recover it.
00:37:36.262 [2024-11-19 08:01:27.851001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.851036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.851198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.851233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.851465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.851525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.851737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.851787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.851962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.852017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.852196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.852265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.852488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.852535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.852704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.852750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.852939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.852995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.853122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.853175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.853281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.853317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.853449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.853485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.853593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.853628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.853773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.853826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.853975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.854030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.854192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.854227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.854367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.854403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.854506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.854542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.854706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.854741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.854907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.854983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.855171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.855213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.855440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.855499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.855631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.855668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.855838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.855878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.856054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.856093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.856308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.856348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.856472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.856523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.856696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.856732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.856846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.856882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.857040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.857076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.857231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.857270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.857441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.263 [2024-11-19 08:01:27.857481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.263 qpair failed and we were unable to recover it.
00:37:36.263 [2024-11-19 08:01:27.857624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.857664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.857825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.857873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.858067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.858109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.858294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.858334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.858509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.858548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.858735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.858773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.859000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.859049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.859369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.859431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.859600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.859652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.859815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.859851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.859957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.859993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.860186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.860237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.860479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.860519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.860680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.860740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.860885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.860922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.861071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.861108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.861251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.861303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.861476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.861515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.264 [2024-11-19 08:01:27.861664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.264 [2024-11-19 08:01:27.861711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.264 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.861868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.861918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.862164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.862206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.862418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.862457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.862609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.862648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.862785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.862821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.862959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.862994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.863193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.863255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.863363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.863402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.863595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.863639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.863831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.863882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.864007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.864057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.864247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.864285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.864423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.864459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.864565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.864601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.864760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.864809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.864954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.864990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.865118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.865157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.865375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.865434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.865627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.865666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.865813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.865853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.866037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.866079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.866268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.866307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.866461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.866500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.866673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.866737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.866913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.866962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.867154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.867210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.867435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.867475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.867631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.867681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.265 qpair failed and we were unable to recover it.
00:37:36.265 [2024-11-19 08:01:27.867850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.265 [2024-11-19 08:01:27.867890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.868030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.868080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.868327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.868361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.868534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.868570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.868783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.868819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.868980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.869033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.869187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.869240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.869408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.869443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.869659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.869704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.869872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.869925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.870032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.870068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.870171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.870207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.870396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.870452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.870587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.870623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.870786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.870840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.871031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.871084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.871292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.871350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.871512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.871547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.871658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.871701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.871853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.871911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.872101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.872147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.872347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.872403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.872665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.872731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.872879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.872918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.873067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.873104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.873311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.873374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.873562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.873597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.873714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.873749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.873882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.873917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.874076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.874114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.874225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.874263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.874383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.874422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.874615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.874651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.874801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.874837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.266 [2024-11-19 08:01:27.874978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.266 [2024-11-19 08:01:27.875033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.266 qpair failed and we were unable to recover it.
00:37:36.267 [2024-11-19 08:01:27.875204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.875272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.875449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.875485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.875625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.875661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.875827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.875879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.876032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.876071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 
00:37:36.267 [2024-11-19 08:01:27.876226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.876265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.876382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.876421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.876573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.876612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.876767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.876802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.876902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.876937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 
00:37:36.267 [2024-11-19 08:01:27.877045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.877080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.877194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.877231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.877367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.877413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.877552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.877586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.877700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.877737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 
00:37:36.267 [2024-11-19 08:01:27.877888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.877929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.878047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.878086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.878247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.878286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.878431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.878470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.878640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.878676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 
00:37:36.267 [2024-11-19 08:01:27.878857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.878914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.879052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.879107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.879219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.879255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.879369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.879404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.879536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.879572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 
00:37:36.267 [2024-11-19 08:01:27.879680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.879737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.879873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.879909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.880057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.880094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.880226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.880262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.880411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.880448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 
00:37:36.267 [2024-11-19 08:01:27.880576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.267 [2024-11-19 08:01:27.880611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.267 qpair failed and we were unable to recover it. 00:37:36.267 [2024-11-19 08:01:27.880712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.880754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.880891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.880931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.881066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.881120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.881281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.881323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 
00:37:36.268 [2024-11-19 08:01:27.881480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.881516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.881651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.881686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.881820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.881855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.881991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.882028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.882174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.882213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 
00:37:36.268 [2024-11-19 08:01:27.882369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.882408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.882574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.882611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.882724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.882759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.882915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.882974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.883126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.883165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 
00:37:36.268 [2024-11-19 08:01:27.883338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.883390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.883501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.883537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.883655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.883696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.883876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.883925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.884122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.884163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 
00:37:36.268 [2024-11-19 08:01:27.884286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.884326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.884465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.884500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.884645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.884703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.884836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.884872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.885009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.885064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 
00:37:36.268 [2024-11-19 08:01:27.885181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.885220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.885379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.885417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.885642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.885683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.885893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.885929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.886154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.886192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 
00:37:36.268 [2024-11-19 08:01:27.886344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.886382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.886547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.886613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.886795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.886831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.886951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.886987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.887103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.887146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 
00:37:36.268 [2024-11-19 08:01:27.887299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.887336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.887501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.268 [2024-11-19 08:01:27.887555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.268 qpair failed and we were unable to recover it. 00:37:36.268 [2024-11-19 08:01:27.887671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.887718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.887871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.887907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.888015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.888067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 
00:37:36.269 [2024-11-19 08:01:27.888242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.888277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.888493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.888532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.888660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.888703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.888844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.888878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.889004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.889039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 
00:37:36.269 [2024-11-19 08:01:27.889161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.889199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.889323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.889361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.889511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.889549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.889676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.889719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 00:37:36.269 [2024-11-19 08:01:27.889824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.269 [2024-11-19 08:01:27.889859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.269 qpair failed and we were unable to recover it. 
00:37:36.273 [2024-11-19 08:01:27.911772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.911808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.912027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.912062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.912271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.912331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.912490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.912536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.912685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.912726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 
00:37:36.273 [2024-11-19 08:01:27.912880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.912930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.913108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.913150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.913387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.913451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.913610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.913669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.913837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.913873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 
00:37:36.273 [2024-11-19 08:01:27.914051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.914090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.914246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.914296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.914487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.914525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.914713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.914763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.914947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.914984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 
00:37:36.273 [2024-11-19 08:01:27.915184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.915219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.915405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.915473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.915634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.915685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.915828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.915866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.916053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.916101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 
00:37:36.273 [2024-11-19 08:01:27.916211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.916247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.916414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.916468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.916631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.916672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.273 qpair failed and we were unable to recover it. 00:37:36.273 [2024-11-19 08:01:27.916791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.273 [2024-11-19 08:01:27.916827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.916987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.917039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 
00:37:36.274 [2024-11-19 08:01:27.917293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.917373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.917593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.917656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.917809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.917846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.918014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.918053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.918226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.918299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 
00:37:36.274 [2024-11-19 08:01:27.918468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.918524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.918675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.918730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.918872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.918907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.919035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.919126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.919239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.919277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 
00:37:36.274 [2024-11-19 08:01:27.919523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.919579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.919815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.919854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.920009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.920049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.920232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.920291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.920400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.920438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 
00:37:36.274 [2024-11-19 08:01:27.920629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.920666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.920837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.920894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.921063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.921117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.921337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.921394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.921532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.921575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 
00:37:36.274 [2024-11-19 08:01:27.921714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.921750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.921901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.921955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.922095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.922154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.922290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.922351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.922473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.922508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 
00:37:36.274 [2024-11-19 08:01:27.922644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.922679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.922815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.922855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.923034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.923089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.274 [2024-11-19 08:01:27.923235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.274 [2024-11-19 08:01:27.923282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.274 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.923394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.923435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 
00:37:36.275 [2024-11-19 08:01:27.923576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.923612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.923731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.923767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.923940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.923976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.924105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.924144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.924311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.924367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 
00:37:36.275 [2024-11-19 08:01:27.924481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.924518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.924619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.924666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.924859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.924918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.925033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.925069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.925250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.925304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 
00:37:36.275 [2024-11-19 08:01:27.925456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.925493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.925629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.925665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.925849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.925914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.926118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.926161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.926323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.926371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 
00:37:36.275 [2024-11-19 08:01:27.926563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.926599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.926770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.926807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.926929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.926982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.927146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.927200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 00:37:36.275 [2024-11-19 08:01:27.927366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.927430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it. 
00:37:36.275 [2024-11-19 08:01:27.927585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.275 [2024-11-19 08:01:27.927621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.275 qpair failed and we were unable to recover it.
00:37:36.279 [2024-11-19 08:01:27.950148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.950188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.950363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.950401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.950642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.950677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.950832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.950867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.951007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.951044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 
00:37:36.279 [2024-11-19 08:01:27.951244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.951282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.951441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.951482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.951623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.951662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.951822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.951858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.951977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.952014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 
00:37:36.279 [2024-11-19 08:01:27.952175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.952226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.952451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.952488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.952612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.279 [2024-11-19 08:01:27.952649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.279 qpair failed and we were unable to recover it. 00:37:36.279 [2024-11-19 08:01:27.952810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.952845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.953075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.953113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 
00:37:36.280 [2024-11-19 08:01:27.953332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.953399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.953635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.953683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.953882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.953917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.954074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.954112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.954270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.954310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 
00:37:36.280 [2024-11-19 08:01:27.954504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.954543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.954678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.954720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.954891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.954924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.955076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.955126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.955298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.955333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 
00:37:36.280 [2024-11-19 08:01:27.955502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.955540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.955732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.955767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.955896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.955945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.956119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.956158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.956305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.956345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 
00:37:36.280 [2024-11-19 08:01:27.956464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.956503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.956667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.956745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.956885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.956940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.957079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.957136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.957294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.957357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 
00:37:36.280 [2024-11-19 08:01:27.957496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.957531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.957660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.957717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.957855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.957892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.958037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.958077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.958237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.958276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 
00:37:36.280 [2024-11-19 08:01:27.958421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.958458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.958605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.958643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.958788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.958824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.958982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.959019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 00:37:36.280 [2024-11-19 08:01:27.959166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.280 [2024-11-19 08:01:27.959209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.280 qpair failed and we were unable to recover it. 
00:37:36.280 [2024-11-19 08:01:27.959401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.959439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.959596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.959653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.959804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.959854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.959980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.960043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.960161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.960226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 
00:37:36.281 [2024-11-19 08:01:27.960328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.960365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.960502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.960539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.960682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.960745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.960857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.960893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.961050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.961087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 
00:37:36.281 [2024-11-19 08:01:27.961264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.961310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.961432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.961468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.961620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.961655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.961774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.961810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.961941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.961990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 
00:37:36.281 [2024-11-19 08:01:27.962136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.962192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.962342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.962395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.962558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.962593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.962757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.962793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.962901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.962936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 
00:37:36.281 [2024-11-19 08:01:27.963071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.963112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.963256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.963308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.963455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.963493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.963646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.281 [2024-11-19 08:01:27.963685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.281 qpair failed and we were unable to recover it. 00:37:36.281 [2024-11-19 08:01:27.963883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.963930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 
00:37:36.282 [2024-11-19 08:01:27.964036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.964089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.964207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.964245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.964370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.964416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.964576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.964613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.964750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.964786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 
00:37:36.282 [2024-11-19 08:01:27.964891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.964926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.965047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.965084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.965221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.965257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.965448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.965520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.965668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.965716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 
00:37:36.282 [2024-11-19 08:01:27.965835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.965871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.965999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.966050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.966202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.966255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.966371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.966417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.966556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.966593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 
00:37:36.282 [2024-11-19 08:01:27.966748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.966809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.966932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.966970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.967143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.967179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.967291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.967326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.967473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.967510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 
00:37:36.282 [2024-11-19 08:01:27.967658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.967702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.967823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.967859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.968022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.968057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.968190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.968234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.968342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.968378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 
00:37:36.282 [2024-11-19 08:01:27.968485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.968522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.968686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.968748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.968887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.968938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.969093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.969131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.282 [2024-11-19 08:01:27.969308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.969345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 
00:37:36.282 [2024-11-19 08:01:27.969482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.282 [2024-11-19 08:01:27.969519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.282 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.969653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.969709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.969856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.969894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.970059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.970104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.970227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.970263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 
00:37:36.283 [2024-11-19 08:01:27.970387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.970436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.970584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.970623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.970793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.970843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.971002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.971050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.971191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.971235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 
00:37:36.283 [2024-11-19 08:01:27.971344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.971379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.971493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.971538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.971666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.971735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.971880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.971918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.972044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.972088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 
00:37:36.283 [2024-11-19 08:01:27.972195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.972229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.972346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.972381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.972579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.972629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.972774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.972824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.972951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.972990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 
00:37:36.283 [2024-11-19 08:01:27.973220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.973256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.973424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.973464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.973599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.973634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.973787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.973824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.973941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.973991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 
00:37:36.283 [2024-11-19 08:01:27.974152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.974206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.974390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.974439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.974578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.974614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.974731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.974768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.283 [2024-11-19 08:01:27.974885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.974921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 
00:37:36.283 [2024-11-19 08:01:27.975028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.283 [2024-11-19 08:01:27.975065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.283 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.975182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.975217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.975357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.975399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.975505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.975548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.975715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.975765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 
00:37:36.284 [2024-11-19 08:01:27.975930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.975979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.976105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.976142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.976287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.976329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.976449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.976484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.976636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.976674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 
00:37:36.284 [2024-11-19 08:01:27.976853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.976901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.977074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.977124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.977274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.977313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.977436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.977473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.977583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.977619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 
00:37:36.284 [2024-11-19 08:01:27.977734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.977770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.977895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.977945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.978090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.978127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.978287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.978322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.978435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.978470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 
00:37:36.284 [2024-11-19 08:01:27.978603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.978654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.978803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.978841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.978957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.978993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.979139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.979175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.979289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.979325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 
00:37:36.284 [2024-11-19 08:01:27.979480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.979517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.979661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.979703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.979880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.979916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.980030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.980072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.980226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.980272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 
00:37:36.284 [2024-11-19 08:01:27.980394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.980430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.284 [2024-11-19 08:01:27.980540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.284 [2024-11-19 08:01:27.980577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.284 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.980767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.980817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.980974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.981035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.981154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.981200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 
00:37:36.285 [2024-11-19 08:01:27.981320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.981356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.981527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.981564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.981672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.981731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.981879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.981915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.982027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.982062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 
00:37:36.285 [2024-11-19 08:01:27.982197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.982233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.982372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.982408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.982541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.982577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.982712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.982755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.982866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.982904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 
00:37:36.285 [2024-11-19 08:01:27.983060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.983110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.983235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.983272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.983380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.983415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.983530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.983565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 00:37:36.285 [2024-11-19 08:01:27.983701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.285 [2024-11-19 08:01:27.983753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.285 qpair failed and we were unable to recover it. 
00:37:36.285 [2024-11-19 08:01:27.983887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.983922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.984041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.984075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.984222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.984257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.984393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.984453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.984647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.984683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.984855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.984891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.985037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.985073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.985236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.985271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.985429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.985468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.985632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.985667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.985841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.985877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.986020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.986055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.986189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.285 [2024-11-19 08:01:27.986224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.285 qpair failed and we were unable to recover it.
00:37:36.285 [2024-11-19 08:01:27.986391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.986425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.986525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.986560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.986713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.986754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.986853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.986889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.987078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.987128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.987269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.987310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.987461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.987500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.987643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.987682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.987837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.987873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.988002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.988052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.988275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.988311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.988530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.988571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.988749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.988786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.988927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.988962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.989091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.989126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.989260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.989295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.989435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.989469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.989605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.989640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.989787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.989837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.990013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.990051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.990164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.990201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.990370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.990407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.990509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.990561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.990745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.990782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.990924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.990959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.991136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.991204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.991420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.991470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.991634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.991670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.286 qpair failed and we were unable to recover it.
00:37:36.286 [2024-11-19 08:01:27.991787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.286 [2024-11-19 08:01:27.991823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.287 [2024-11-19 08:01:27.991964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.991999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.992105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.992141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.992674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.992742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.992885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.992922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.993077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.993117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.993291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.993330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.993458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.993512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.993682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.993759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.993918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.993960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.994076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.994111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.994248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.994284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.994383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.994418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.994586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.994625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.994808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.994845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.994981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.995031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.995194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.995235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.995427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.995468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.995636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.995676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.995830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.995879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.996021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.996062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.996197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.996252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.996415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.996471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.996603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.996639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.996766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.996801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.996946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.996981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.997134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.997173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.997287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.997326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.997479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.997537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.997702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.997753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.997891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.997928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.998153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.998206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.998359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.287 [2024-11-19 08:01:27.998423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.287 qpair failed and we were unable to recover it.
00:37:36.287 [2024-11-19 08:01:27.998559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:27.998595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:27.998746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:27.998786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:27.998924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:27.998967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:27.999185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:27.999220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:27.999385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:27.999440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:27.999572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:27.999611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:27.999756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:27.999791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:27.999925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:27.999974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.000113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.000152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.000296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.000335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.000462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.000515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.000700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.000752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.000898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.000961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.001096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.001138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.001289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.001328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.001491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.001531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.001698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.001745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.001862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.001898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.002060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.002098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.002219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.002258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.002513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.288 [2024-11-19 08:01:28.002567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.288 qpair failed and we were unable to recover it.
00:37:36.288 [2024-11-19 08:01:28.002713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.002771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 00:37:36.288 [2024-11-19 08:01:28.002882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.002917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 00:37:36.288 [2024-11-19 08:01:28.003076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.003114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 00:37:36.288 [2024-11-19 08:01:28.003292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.003331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 00:37:36.288 [2024-11-19 08:01:28.003465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.003504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 
00:37:36.288 [2024-11-19 08:01:28.003630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.003665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 00:37:36.288 [2024-11-19 08:01:28.003810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.003860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 00:37:36.288 [2024-11-19 08:01:28.004012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.004051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 00:37:36.288 [2024-11-19 08:01:28.004236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.004301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 00:37:36.288 [2024-11-19 08:01:28.004465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.004528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 
00:37:36.288 [2024-11-19 08:01:28.004709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.288 [2024-11-19 08:01:28.004770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.288 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.004908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.004952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.005060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.005095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.005234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.005270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.005396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.005446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 
00:37:36.289 [2024-11-19 08:01:28.005558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.005596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.005717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.005762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.005892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.005932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.006073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.006109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.006287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.006337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 
00:37:36.289 [2024-11-19 08:01:28.006452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.006490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.006681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.006751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.006889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.006931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.007064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.007103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.007247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.007311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 
00:37:36.289 [2024-11-19 08:01:28.007444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.007509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.007697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.007747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.007863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.007900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.008003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.008039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.008194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.008264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 
00:37:36.289 [2024-11-19 08:01:28.008434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.008492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.008646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.008687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.008945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.008980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.009104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.009159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.009294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.009333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 
00:37:36.289 [2024-11-19 08:01:28.009552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.009608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.009748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.009784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.009948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.009999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.010194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.010255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.010398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.010458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 
00:37:36.289 [2024-11-19 08:01:28.010614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.010655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.289 [2024-11-19 08:01:28.010849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.289 [2024-11-19 08:01:28.010898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.289 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.011083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.011138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.011256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.011299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.011474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.011514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 
00:37:36.290 [2024-11-19 08:01:28.011678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.011728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.011860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.011909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.012130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.012188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.012366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.012422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.012536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.012575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 
00:37:36.290 [2024-11-19 08:01:28.012745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.012781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.012944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.012980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.013139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.013178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.013384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.013424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.013593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.013631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 
00:37:36.290 [2024-11-19 08:01:28.013789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.013824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.013959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.014013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.014151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.014187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.014305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.014361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.014578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.014649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 
00:37:36.290 [2024-11-19 08:01:28.014799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.014838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.014976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.015013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.015172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.015232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.015417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.015480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.015596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.015652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 
00:37:36.290 [2024-11-19 08:01:28.015862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.015898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.016049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.016119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.016275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.016343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.016527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.016568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.016678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.016739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 
00:37:36.290 [2024-11-19 08:01:28.016892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.016940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.017170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.017225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.017444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.290 [2024-11-19 08:01:28.017500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.290 qpair failed and we were unable to recover it. 00:37:36.290 [2024-11-19 08:01:28.017645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.017683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.017851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.017890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 
00:37:36.291 [2024-11-19 08:01:28.018036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.018091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.018248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.018303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.018482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.018539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.018685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.018739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.018873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.018909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 
00:37:36.291 [2024-11-19 08:01:28.019016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.019052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.019178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.019218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.019455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.019528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.019668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.019713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.019876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.019911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 
00:37:36.291 [2024-11-19 08:01:28.020040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.020098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.020240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.020296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.020433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.020489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.020624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.020666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.020824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.020859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 
00:37:36.291 [2024-11-19 08:01:28.021020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.021059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.021234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.021297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.021463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.021526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.021679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.021748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.021890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.021942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 
00:37:36.291 [2024-11-19 08:01:28.022106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.022165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.022338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.022395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.022570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.022623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.022731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.022767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.022906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.022967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 
00:37:36.291 [2024-11-19 08:01:28.023141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.023191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.023333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.023370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.023535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.023572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.023736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.023786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 00:37:36.291 [2024-11-19 08:01:28.023953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.291 [2024-11-19 08:01:28.024030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.291 qpair failed and we were unable to recover it. 
00:37:36.292 [2024-11-19 08:01:28.024150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.024190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.024333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.024390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.024515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.024554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.024671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.024741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.024852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.024887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 
00:37:36.292 [2024-11-19 08:01:28.024996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.025031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.025131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.025186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.025307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.025346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.025499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.025558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.025706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.025756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 
00:37:36.292 [2024-11-19 08:01:28.025898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.025942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.026086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.026123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.026247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.026286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.026440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.026479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.026603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.026643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 
00:37:36.292 [2024-11-19 08:01:28.026812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.026848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.027018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.027056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.027182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.027221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.027340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.027392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.027540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.027582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 
00:37:36.292 [2024-11-19 08:01:28.027753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.027791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.027954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.028004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.028132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.028190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.028392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.028448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.028557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.028593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 
00:37:36.292 [2024-11-19 08:01:28.028754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.028804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.028973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.292 [2024-11-19 08:01:28.029014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.292 qpair failed and we were unable to recover it. 00:37:36.292 [2024-11-19 08:01:28.029228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.029287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.029514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.029577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.029712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.029775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 
00:37:36.293 [2024-11-19 08:01:28.029931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.029988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.030162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.030223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.030351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.030411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.030549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.030585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.030730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.030766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 
00:37:36.293 [2024-11-19 08:01:28.030951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.031005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.031150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.031214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.031376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.031436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.031585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.031624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.031771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.031812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 
00:37:36.293 [2024-11-19 08:01:28.031929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.031976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.032089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.032130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.032253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.032298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.032470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.032518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.032702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.032776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 
00:37:36.293 [2024-11-19 08:01:28.032922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.032971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.033090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.033130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.033268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.033307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.033464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.033500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.033656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.033699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 
00:37:36.293 [2024-11-19 08:01:28.033819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.033855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.034012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.034068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.034243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.034284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.034412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.034448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 00:37:36.293 [2024-11-19 08:01:28.034588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.293 [2024-11-19 08:01:28.034624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.293 qpair failed and we were unable to recover it. 
00:37:36.293 [2024-11-19 08:01:28.034798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.034855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.035022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.035066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.035255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.035316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.035547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.035588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.035761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.035798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 
00:37:36.294 [2024-11-19 08:01:28.035941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.035977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.036136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.036196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.036308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.036345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.036479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.036525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.036678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.036722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 
00:37:36.294 [2024-11-19 08:01:28.036879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.036933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.037085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.037140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.037324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.037385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.037509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.037549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.037709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.037746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 
00:37:36.294 [2024-11-19 08:01:28.037866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.037904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.038041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.038079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.038200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.038240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.038417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.038457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 00:37:36.294 [2024-11-19 08:01:28.038605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.038645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 
00:37:36.294 [2024-11-19 08:01:28.038858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.294 [2024-11-19 08:01:28.038896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.294 qpair failed and we were unable to recover it. 
00:37:36.294-00:37:36.298 [2024-11-19 08:01:28.039008 .. 08:01:28.060635] [the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeated ~114 more times, cycling over tqpair=0x61500021ff00, 0x6150001ffe80, 0x615000210000, and 0x6150001f2f00, all with addr=10.0.0.2, port=4420] 
00:37:36.298 [2024-11-19 08:01:28.060751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.298 [2024-11-19 08:01:28.060788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.298 qpair failed and we were unable to recover it. 00:37:36.298 [2024-11-19 08:01:28.060952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.298 [2024-11-19 08:01:28.060988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.298 qpair failed and we were unable to recover it. 00:37:36.298 [2024-11-19 08:01:28.061143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.298 [2024-11-19 08:01:28.061180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.298 qpair failed and we were unable to recover it. 00:37:36.298 [2024-11-19 08:01:28.061318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.298 [2024-11-19 08:01:28.061354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.298 qpair failed and we were unable to recover it. 00:37:36.298 [2024-11-19 08:01:28.061466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.298 [2024-11-19 08:01:28.061504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.298 qpair failed and we were unable to recover it. 
00:37:36.298 [2024-11-19 08:01:28.061653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.298 [2024-11-19 08:01:28.061711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.298 qpair failed and we were unable to recover it. 00:37:36.298 [2024-11-19 08:01:28.061848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.298 [2024-11-19 08:01:28.061898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.298 qpair failed and we were unable to recover it. 00:37:36.298 [2024-11-19 08:01:28.062124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.298 [2024-11-19 08:01:28.062174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.298 qpair failed and we were unable to recover it. 00:37:36.298 [2024-11-19 08:01:28.062293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.062342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.062508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.062550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 
00:37:36.299 [2024-11-19 08:01:28.062681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.062731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.062840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.062876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.063006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.063057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.063178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.063219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.063367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.063405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 
00:37:36.299 [2024-11-19 08:01:28.063563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.063599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.063715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.063752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.063863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.063899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.064043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.064081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.064197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.064233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 
00:37:36.299 [2024-11-19 08:01:28.064370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.064407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.064572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.064608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.064752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.064790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.064909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.064944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.065092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.065128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 
00:37:36.299 [2024-11-19 08:01:28.065281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.065331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.065468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.065504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.065651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.065687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.065811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.065847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.065954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.065990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 
00:37:36.299 [2024-11-19 08:01:28.066150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.066187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.066327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.066364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.066500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.066536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.066708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.066758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.066894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.066932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 
00:37:36.299 [2024-11-19 08:01:28.067100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.067136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.067242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.067278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.067414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.067450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.067555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.067591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.067729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.067765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 
00:37:36.299 [2024-11-19 08:01:28.067900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.299 [2024-11-19 08:01:28.067963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.299 qpair failed and we were unable to recover it. 00:37:36.299 [2024-11-19 08:01:28.068139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.068177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.068356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.068407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.068552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.068591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.068735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.068773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 
00:37:36.300 [2024-11-19 08:01:28.068940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.068976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.069118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.069155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.069294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.069329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.069442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.069477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.069639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.069679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 
00:37:36.300 [2024-11-19 08:01:28.069825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.069860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.070015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.070065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.070208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.070247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.070377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.070414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.070553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.070590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 
00:37:36.300 [2024-11-19 08:01:28.070709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.070758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.070892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.070941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.071109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.071147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.071282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.071318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.071444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.071480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 
00:37:36.300 [2024-11-19 08:01:28.071626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.071675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.071833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.071871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.072014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.072051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.072196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.072232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.072364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.072399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 
00:37:36.300 [2024-11-19 08:01:28.072566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.072603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.072783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.072832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.072974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.073014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.073152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.073189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.073323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.073359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 
00:37:36.300 [2024-11-19 08:01:28.073459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.073495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.073636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.073674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.300 qpair failed and we were unable to recover it. 00:37:36.300 [2024-11-19 08:01:28.073799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.300 [2024-11-19 08:01:28.073837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.301 qpair failed and we were unable to recover it. 00:37:36.301 [2024-11-19 08:01:28.073976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.301 [2024-11-19 08:01:28.074016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.301 qpair failed and we were unable to recover it. 00:37:36.301 [2024-11-19 08:01:28.074156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.301 [2024-11-19 08:01:28.074193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.301 qpair failed and we were unable to recover it. 
00:37:36.301 [2024-11-19 08:01:28.074327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.074363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.074514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.074551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.074661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.074706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.074823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.074859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.075013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.075049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.075183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.075219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.075352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.075387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.075527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.075563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.075695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.075746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.075866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.075905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.076066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.076102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.076229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.076267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.076436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.076471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.076577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.076613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.076746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.076783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.076917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.076952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.077119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.077155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.077319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.077355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.077473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.077509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.077649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.077695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.077817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.077853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.077965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.078000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.078217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.078252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.078364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.078399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.078539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.078574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.078708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.078758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.078941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.078979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.079123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.301 [2024-11-19 08:01:28.079160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.301 qpair failed and we were unable to recover it.
00:37:36.301 [2024-11-19 08:01:28.079302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.079338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.079445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.079481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.079609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.079660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.079817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.079854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.079995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.080029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.080132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.080168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.080305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.080339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.080469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.080503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.080609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.080652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.080849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.080899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.081030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.081079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.081229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.081269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.081435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.081473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.081614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.081656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.081815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.081865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.082005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.082056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.082201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.082240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.082383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.082419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.082548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.082584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.082749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.082786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.082899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.082936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.083076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.083111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.083274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.083309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.083422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.083458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.083635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.083685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.083831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.083881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.084054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.084091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.084209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.084245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.084352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.084387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.084497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.302 [2024-11-19 08:01:28.084539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.302 qpair failed and we were unable to recover it.
00:37:36.302 [2024-11-19 08:01:28.084710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.084747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.084885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.084921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.085049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.085085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.085236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.085271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.085410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.085445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.085594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.085645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.085819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.085857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.085970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.086006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.086166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.086201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.086313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.086349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.086490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.086527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.086699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.086736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.086896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.086947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.087120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.087159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.087269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.087306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.087471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.087506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.087656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.087699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.087820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.087856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.087993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.088028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.088190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.088226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.088336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.088374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.088504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.088542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.088656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.088708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.088815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.088855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.088989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.089025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.089161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.089197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.089309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.089346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.089492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.089530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.089674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.303 [2024-11-19 08:01:28.089721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.303 qpair failed and we were unable to recover it.
00:37:36.303 [2024-11-19 08:01:28.089864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.089900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.090007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.090045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.090185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.090223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.090331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.090368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.090541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.090577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.090699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.090735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.090874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.090912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.091083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.091119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.091284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.091319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.091455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.091491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.091631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.091667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.091831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.091881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.092058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.092096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.092237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.092274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.092419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.092455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.092583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.092634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.092776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.092827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.092956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.093007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.093126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.093166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.093356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.093393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.093494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.093530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.093678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.093721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.093840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.093881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.094005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.094058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.094213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.094249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.094389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.094425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.094592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.094632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.094781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.304 [2024-11-19 08:01:28.094827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.304 qpair failed and we were unable to recover it.
00:37:36.304 [2024-11-19 08:01:28.094950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.304 [2024-11-19 08:01:28.095000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.304 qpair failed and we were unable to recover it. 00:37:36.304 [2024-11-19 08:01:28.095228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.304 [2024-11-19 08:01:28.095266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.304 qpair failed and we were unable to recover it. 00:37:36.304 [2024-11-19 08:01:28.095420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.304 [2024-11-19 08:01:28.095456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.304 qpair failed and we were unable to recover it. 00:37:36.304 [2024-11-19 08:01:28.095597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.304 [2024-11-19 08:01:28.095632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.304 qpair failed and we were unable to recover it. 00:37:36.304 [2024-11-19 08:01:28.095776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.304 [2024-11-19 08:01:28.095827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.304 qpair failed and we were unable to recover it. 
00:37:36.304 [2024-11-19 08:01:28.095994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.304 [2024-11-19 08:01:28.096060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.304 qpair failed and we were unable to recover it. 00:37:36.304 [2024-11-19 08:01:28.096220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.304 [2024-11-19 08:01:28.096265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.304 qpair failed and we were unable to recover it. 00:37:36.304 [2024-11-19 08:01:28.096438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.096475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.096591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.096628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.096800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.096836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 
00:37:36.305 [2024-11-19 08:01:28.096947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.096983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.097161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.097200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.097339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.097376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.097482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.097528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.097666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.097709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 
00:37:36.305 [2024-11-19 08:01:28.097865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.097915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.098081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.098131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.098303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.098341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.098471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.098507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.098620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.098654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 
00:37:36.305 [2024-11-19 08:01:28.098785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.098822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.098925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.098960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.099064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.099100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.099241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.099276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.099403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.099437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 
00:37:36.305 [2024-11-19 08:01:28.099549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.099589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.099748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.099798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.099916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.099954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.100091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.100127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.100288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.100324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 
00:37:36.305 [2024-11-19 08:01:28.100463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.100498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.100638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.100675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.100853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.100891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.101075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.101116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.101267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.101304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 
00:37:36.305 [2024-11-19 08:01:28.101412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.101460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.101598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.101633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.101742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.101779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.101918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.101954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.305 [2024-11-19 08:01:28.102088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.102123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 
00:37:36.305 [2024-11-19 08:01:28.102285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.305 [2024-11-19 08:01:28.102321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.305 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.102434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.102484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.102605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.102643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.102809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.102860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.103029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.103068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 
00:37:36.306 [2024-11-19 08:01:28.103182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.103219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.103362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.103403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.103573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.103610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.103749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.103799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.103936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.103986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 
00:37:36.306 [2024-11-19 08:01:28.104113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.104153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.104319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.104355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.104471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.104508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.104673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.104715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.104880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.104917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 
00:37:36.306 [2024-11-19 08:01:28.105060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.105096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.105220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.105271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.105417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.105456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.105623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.105660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.105833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.105870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 
00:37:36.306 [2024-11-19 08:01:28.106029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.106065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.106197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.106233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.106362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.106399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.106502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.106537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.106668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.106730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 
00:37:36.306 [2024-11-19 08:01:28.106879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.106917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.107058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.107095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.107203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.107240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.107338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.107374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.107541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.107578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 
00:37:36.306 [2024-11-19 08:01:28.107739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.107775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.107924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.107974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.108120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.108158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.108281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.108318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 00:37:36.306 [2024-11-19 08:01:28.108456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.306 [2024-11-19 08:01:28.108491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.306 qpair failed and we were unable to recover it. 
00:37:36.306 [2024-11-19 08:01:28.108610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.306 [2024-11-19 08:01:28.108647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.306 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.108766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.108803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.108939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.108975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.109114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.109150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.109291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.109327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.109432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.109469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.109584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.109621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.109861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.109911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.110060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.110098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.110234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.110270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.110405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.110441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.110625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.110682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.110886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.110936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.111081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.111117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.111254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.111290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.111467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.111502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.111652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.111695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.111801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.111836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.111989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.112039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.112159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.112197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.112318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.112364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.112537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.112574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.112674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.112717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.112849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.112885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.113021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.113057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.113190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.113227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.113359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.113394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.113563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.113598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.113736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.113772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.113910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.113945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.114113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.114148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.114277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.114312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.114417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.114452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.114609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.114660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.114799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.114849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.307 qpair failed and we were unable to recover it.
00:37:36.307 [2024-11-19 08:01:28.114998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.307 [2024-11-19 08:01:28.115037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.115256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.115292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.115455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.115491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.115603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.115645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.115799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.115835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.115934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.115969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.116107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.116151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.116295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.116331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.116493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.116527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.116683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.116740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.116900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.116950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.117127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.117165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.117277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.117314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.117449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.117485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.117659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.117701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.117827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.117863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.118007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.118047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.118149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.118184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.118323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.118358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.118539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.118579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.118740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.118791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.118939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.118978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.119160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.119198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.119365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.119433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.119565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.119602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.119760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.119810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.119944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.119994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.120167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.120204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.120341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.120378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.120553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.120589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.120712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.120763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.120908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.120946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.121089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.121126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.308 [2024-11-19 08:01:28.121237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.308 [2024-11-19 08:01:28.121274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.308 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.121448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.121486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.121729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.121779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.121923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.121959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.122092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.122129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.122232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.122268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.122394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.122430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.122537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.122572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.122714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.122750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.122865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.122902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.123044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.123080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.123193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.123228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.123381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.123421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.123567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.123603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.123713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.123749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.123891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.123927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.124067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.124102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.124207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.124243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.124386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.124422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.124539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.124575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.124716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.124751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.124886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.124922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.125084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.125120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.125256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.125296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.125439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.125476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.125631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.125681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.125840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.125890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.126036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.126072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.126207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.126242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.126377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.126412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.126551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.126587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.126707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.126744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.126862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.126902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.127044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.127080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.127180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.127214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.127325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.127361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.309 qpair failed and we were unable to recover it.
00:37:36.309 [2024-11-19 08:01:28.127491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.309 [2024-11-19 08:01:28.127526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.127646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.127681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.127906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.127941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.128080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.128115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.128218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.128251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.128357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.128392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.128540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.128591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.128737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.128776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.128891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.128927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.129037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.129073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.129215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.310 [2024-11-19 08:01:28.129251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.310 qpair failed and we were unable to recover it.
00:37:36.310 [2024-11-19 08:01:28.129382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.129418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.129538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.129574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.129713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.129751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.129929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.129979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.130103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.130142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 
00:37:36.310 [2024-11-19 08:01:28.130300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.130349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.130468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.130506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.130639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.130699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.130850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.130889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.131037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.131075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 
00:37:36.310 [2024-11-19 08:01:28.131217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.131253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.131389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.131425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.131565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.131601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.131730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.131780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.131909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.131948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 
00:37:36.310 [2024-11-19 08:01:28.132067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.132103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.132218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.132262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.132405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.132441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.132574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.132611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.132735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.132772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 
00:37:36.310 [2024-11-19 08:01:28.132933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.132970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.133085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.133122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.133238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.133274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.133502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.133539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.133675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.133725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 
00:37:36.310 [2024-11-19 08:01:28.133827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.133863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.133994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.134030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.134144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.134180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.134293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.134329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.134483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.134533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 
00:37:36.310 [2024-11-19 08:01:28.134680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.310 [2024-11-19 08:01:28.134725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.310 qpair failed and we were unable to recover it. 00:37:36.310 [2024-11-19 08:01:28.134850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.134886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.135021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.135056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.135157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.135192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.135300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.135336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 
00:37:36.311 [2024-11-19 08:01:28.135400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:36.311 [2024-11-19 08:01:28.135541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.135581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.135700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.135738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.135842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.135878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.136023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.136060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.136171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.136208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 
00:37:36.311 [2024-11-19 08:01:28.136320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.136355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.136504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.136541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.136702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.136752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.136906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.136944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.137087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.137123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 
00:37:36.311 [2024-11-19 08:01:28.137270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.137306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.137467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.137517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.137645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.137683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.137830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.137866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.137978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.138014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 
00:37:36.311 [2024-11-19 08:01:28.138154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.138190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.138346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.138396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.138520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.138558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.138717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.138767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.138926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.138965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 
00:37:36.311 [2024-11-19 08:01:28.139108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.139144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.139262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.139298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.139410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.139447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.139605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.139655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.139816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.139854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 
00:37:36.311 [2024-11-19 08:01:28.139965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.140003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.140114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.140150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.140293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.140329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.140506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.140542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.140695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.140732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 
00:37:36.311 [2024-11-19 08:01:28.140879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.140919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.141077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.141115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.141232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.141280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.141388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.141424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 00:37:36.311 [2024-11-19 08:01:28.141565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.311 [2024-11-19 08:01:28.141608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.311 qpair failed and we were unable to recover it. 
00:37:36.311 [2024-11-19 08:01:28.141722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.141758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.141895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.141930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.142041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.142077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.142214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.142249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.142387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.142422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 
00:37:36.312 [2024-11-19 08:01:28.142540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.142578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.142708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.142745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.142900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.142951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.143068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.143105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.143250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.143285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 
00:37:36.312 [2024-11-19 08:01:28.143401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.143438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.143540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.143576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.143717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.143753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.143870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.143905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.144034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.144070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 
00:37:36.312 [2024-11-19 08:01:28.144207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.144243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.144381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.144416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.144526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.144562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.144698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.144749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.144888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.144924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 
00:37:36.312 [2024-11-19 08:01:28.145045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.145085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.145198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.145233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.145339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.145374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.145491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.145526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.145666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.145706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 
00:37:36.312 [2024-11-19 08:01:28.145818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.145853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.145973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.146009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.146118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.146153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.146249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.146283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.146394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.146430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 
00:37:36.312 [2024-11-19 08:01:28.146537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.146571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.146702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.146753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.146874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.146910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.147016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.147052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.147187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.147223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 
00:37:36.312 [2024-11-19 08:01:28.147342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.147376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.312 qpair failed and we were unable to recover it. 00:37:36.312 [2024-11-19 08:01:28.147490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.312 [2024-11-19 08:01:28.147525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.147661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.147704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.147827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.147863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.147967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.148010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 
00:37:36.313 [2024-11-19 08:01:28.148179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.148215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.148381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.148417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.148523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.148570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.148684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.148729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.148870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.148906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 
00:37:36.313 [2024-11-19 08:01:28.149045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.149081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.149221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.149259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.149393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.149428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.149563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.149598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.149711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.149747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 
00:37:36.313 [2024-11-19 08:01:28.149907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.149957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.150106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.150145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.150260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.150298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.150434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.150471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.150572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.150608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 
00:37:36.313 [2024-11-19 08:01:28.150772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.150823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.150940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.150977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.151093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.151129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.151246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.151282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.151405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.151440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 
00:37:36.313 [2024-11-19 08:01:28.151542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.151577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.151707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.151743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.151848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.151883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.151992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.152028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.152169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.152204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 
00:37:36.313 [2024-11-19 08:01:28.152350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.152385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.152528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.152579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.313 qpair failed and we were unable to recover it. 00:37:36.313 [2024-11-19 08:01:28.152752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.313 [2024-11-19 08:01:28.152802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.152946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.152986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.153157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.153193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 
00:37:36.314 [2024-11-19 08:01:28.153349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.153384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.153497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.153533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.153645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.153682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.153827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.153862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.153971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.154006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 
00:37:36.314 [2024-11-19 08:01:28.154141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.154185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.154357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.154392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.154525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.154560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.154676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.154719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.154836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.154877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 
00:37:36.314 [2024-11-19 08:01:28.154985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.155020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.155130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.155166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.155273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.155315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.155464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.155513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.155657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.155704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 
00:37:36.314 [2024-11-19 08:01:28.155815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.155852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.155992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.156029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.314 [2024-11-19 08:01:28.156135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.314 [2024-11-19 08:01:28.156171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.314 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.156281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.156317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.156446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.156482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 
00:37:36.602 [2024-11-19 08:01:28.156623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.156660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.156807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.156858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.156983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.157019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.157181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.157216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.157328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.157364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 
00:37:36.602 [2024-11-19 08:01:28.157510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.157546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.157685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.157732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.157859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.157894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.158030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.158069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 00:37:36.602 [2024-11-19 08:01:28.158213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.602 [2024-11-19 08:01:28.158250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.602 qpair failed and we were unable to recover it. 
00:37:36.602 [2024-11-19 08:01:28.158378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.602 [2024-11-19 08:01:28.158428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.602 qpair failed and we were unable to recover it.
00:37:36.602 [2024-11-19 08:01:28.158576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.602 [2024-11-19 08:01:28.158615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.602 qpair failed and we were unable to recover it.
00:37:36.602 [2024-11-19 08:01:28.158764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.602 [2024-11-19 08:01:28.158801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.602 qpair failed and we were unable to recover it.
00:37:36.602 [2024-11-19 08:01:28.158921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.602 [2024-11-19 08:01:28.158958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.602 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.159075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.159111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.159248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.159284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.159390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.159428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.159545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.159583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.159736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.159785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.159936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.159973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.160112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.160149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.160250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.160286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.160392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.160429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.160617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.160667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.160801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.160839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.160992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.161028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.161175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.161211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.161347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.161382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.161487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.161523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.161633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.161674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.161849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.161899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.162021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.162058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.162172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.162210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.162350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.162385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.162546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.162582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.162745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.162782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.162884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.162920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.163030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.163065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.163181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.163216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.163330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.163365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.163473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.163508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.163613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.163646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.163761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.163797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.163915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.163955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.164073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.164110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.164262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.164298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.164402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.164439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.164565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.164602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.164750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.164788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.164900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.164938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.165078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.603 [2024-11-19 08:01:28.165120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.603 qpair failed and we were unable to recover it.
00:37:36.603 [2024-11-19 08:01:28.165227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.165262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.165406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.165441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.165574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.165610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.165727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.165763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.165876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.165912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.166055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.166090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.166233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.166270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.166381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.166417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.166535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.166585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.166731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.166767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.166871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.166906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.167042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.167077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.167210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.167244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.167363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.167397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.167501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.167538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.167644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.167681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.167826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.167862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.167973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.168009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.168147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.168188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.168334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.168373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.168478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.168514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.168630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.168665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.168779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.168815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.168962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.168998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.169115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.169151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.169260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.169296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.169438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.169474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.169575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.169611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.169725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.169762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.169901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.169936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.170076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.170112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.170253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.170288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.170422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.170471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.170599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.170650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.170804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.170842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.170955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.170991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.171121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.171156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.604 [2024-11-19 08:01:28.171299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.604 [2024-11-19 08:01:28.171335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.604 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.171442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.171479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.171631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.171666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.171794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.171834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.171990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.172027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.172136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.172173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.172291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.172327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.172461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.172497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.172623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.172673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.172793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.172841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.172977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.173013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.173186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.173221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.173325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.173361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.173500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.173538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.173710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.173748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.173863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.173903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.174046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.174083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.174193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.174228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.174368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.174403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.174522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.174558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.174697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.174732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.174857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.174898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.175037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.175073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.175181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.175216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.175351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.175386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.175524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.175560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.175727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.175778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.175935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.175987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.176109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.176148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.176274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.176311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.176431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.176468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.176609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.176647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.176784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.176820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.176930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.176968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.177118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.177168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.177329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.177368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.177480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.605 [2024-11-19 08:01:28.177516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.605 qpair failed and we were unable to recover it.
00:37:36.605 [2024-11-19 08:01:28.177630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.605 [2024-11-19 08:01:28.177666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.605 qpair failed and we were unable to recover it. 00:37:36.605 [2024-11-19 08:01:28.177818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.605 [2024-11-19 08:01:28.177853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.177967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.178002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.178143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.178178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.178314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.178352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 
00:37:36.606 [2024-11-19 08:01:28.178490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.178528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.178636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.178672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.178797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.178833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.178941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.178976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.179140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.179176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 
00:37:36.606 [2024-11-19 08:01:28.179275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.179310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.179422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.179472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.179584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.179621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.179749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.179785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.179897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.179934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 
00:37:36.606 [2024-11-19 08:01:28.180070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.180106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.180218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.180254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.180409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.180445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.180576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.180612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.180762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.180799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 
00:37:36.606 [2024-11-19 08:01:28.180968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.181005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.181136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.181186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.181309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.181348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.181459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.181495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.181627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.181668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 
00:37:36.606 [2024-11-19 08:01:28.181804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.181853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.181976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.182014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.182156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.182192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.182326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.182362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.182466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.182502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 
00:37:36.606 [2024-11-19 08:01:28.182607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.182642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.182763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.182800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.182906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.182945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.183083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.183120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.183272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.183308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 
00:37:36.606 [2024-11-19 08:01:28.183424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.183460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.183600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.183637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.183820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.183857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.183986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.606 [2024-11-19 08:01:28.184024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.606 qpair failed and we were unable to recover it. 00:37:36.606 [2024-11-19 08:01:28.184127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.184163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 
00:37:36.607 [2024-11-19 08:01:28.184296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.184331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.184442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.184479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.184589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.184625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.184777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.184815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.184936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.184973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 
00:37:36.607 [2024-11-19 08:01:28.185116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.185151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.185286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.185321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.185463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.185498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.185600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.185635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.185750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.185786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 
00:37:36.607 [2024-11-19 08:01:28.185927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.185963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.186075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.186112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.186250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.186285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.186423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.186460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.186619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.186670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 
00:37:36.607 [2024-11-19 08:01:28.186795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.186831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.186969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.187004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.187137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.187173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.187281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.187316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.187432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.187468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 
00:37:36.607 [2024-11-19 08:01:28.187605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.187641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.187806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.187857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.187978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.188014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.188145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.188180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.188289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.188329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 
00:37:36.607 [2024-11-19 08:01:28.188436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.188471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.188583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.188620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.188758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.188809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.188955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.188993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.189129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.189165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 
00:37:36.607 [2024-11-19 08:01:28.189317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.189354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.607 qpair failed and we were unable to recover it. 00:37:36.607 [2024-11-19 08:01:28.189498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.607 [2024-11-19 08:01:28.189534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.608 qpair failed and we were unable to recover it. 00:37:36.608 [2024-11-19 08:01:28.189642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.608 [2024-11-19 08:01:28.189678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.608 qpair failed and we were unable to recover it. 00:37:36.608 [2024-11-19 08:01:28.189801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.608 [2024-11-19 08:01:28.189838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.608 qpair failed and we were unable to recover it. 00:37:36.608 [2024-11-19 08:01:28.189944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.608 [2024-11-19 08:01:28.189979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.608 qpair failed and we were unable to recover it. 
00:37:36.608 [2024-11-19 08:01:28.190080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.190115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.190274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.190309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.190426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.190476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.190604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.190641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.190781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.190831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.190951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.190987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.191097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.191133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.191246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.191282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.191399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.191435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.191566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.191602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.191756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.191793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.191921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.191971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.192110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.192148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.192260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.192296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.192425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.192460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.192585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.192635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.192779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.192817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.192957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.193004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.193167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.193203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.193306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.193342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.193514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.193552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.193680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.193738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.193872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.193922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.194043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.194081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.194222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.194258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.194368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.194404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.194510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.194547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.194761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.194797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.194913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.194949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.195083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.195124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.195267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.195303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.195414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.195450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.195663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.195708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.195822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.608 [2024-11-19 08:01:28.195858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.608 qpair failed and we were unable to recover it.
00:37:36.608 [2024-11-19 08:01:28.195993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.196028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.196146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.196182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.196306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.196356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.196525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.196574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.196706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.196757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.196895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.196931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.197071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.197106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.197218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.197254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.197363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.197397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.197545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.197584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.197740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.197791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.197919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.197969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.198116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.198155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.198297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.198334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.198443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.198479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.198596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.198633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.198799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.198850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.198972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.199011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.199124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.199161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.199301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.199338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.199441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.199476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.199607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.199644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.199800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.199837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.199961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.200010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.200155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.200192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.200323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.200359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.200472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.200508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.200674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.200722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.200835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.200871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.200987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.201023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.201135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.201171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.201308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.201344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.201454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.201489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.201600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.201635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.201782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.201820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.201923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.201968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.202083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.202120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.609 [2024-11-19 08:01:28.202267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.609 [2024-11-19 08:01:28.202302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.609 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.202443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.202480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.202606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.202643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.202771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.202808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.202915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.202951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.203104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.203140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.203303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.203339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.203479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.203516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.203633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.203670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.203866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.203915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.204031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.204069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.204181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.204218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.204336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.204373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.204538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.204573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.204695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.204733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.204868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.204919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.205053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.205103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.205222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.205258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.205379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.205416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.205514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.205549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.205658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.205699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.205851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.205887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.206019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.206055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.206189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.206225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.206340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.206376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.206524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.206564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.206695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.206736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.206846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.206882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.207022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.207058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.207169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.207205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.207318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.207378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.207487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.207525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.207672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.207736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.207885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.207920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.208057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.208092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.208233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.208269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.208377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.208413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.208557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.610 [2024-11-19 08:01:28.208596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.610 qpair failed and we were unable to recover it.
00:37:36.610 [2024-11-19 08:01:28.208735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.611 [2024-11-19 08:01:28.208773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.611 qpair failed and we were unable to recover it.
00:37:36.611 [2024-11-19 08:01:28.208901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.611 [2024-11-19 08:01:28.208950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.611 qpair failed and we were unable to recover it.
00:37:36.611 [2024-11-19 08:01:28.209071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.611 [2024-11-19 08:01:28.209108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.611 qpair failed and we were unable to recover it.
00:37:36.611 [2024-11-19 08:01:28.209245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.611 [2024-11-19 08:01:28.209281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.611 qpair failed and we were unable to recover it.
00:37:36.611 [2024-11-19 08:01:28.209390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.611 [2024-11-19 08:01:28.209426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.611 qpair failed and we were unable to recover it.
00:37:36.611 [2024-11-19 08:01:28.209530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.611 [2024-11-19 08:01:28.209565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.611 qpair failed and we were unable to recover it.
00:37:36.611 [2024-11-19 08:01:28.209675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.209718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.209881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.209916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.210059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.210097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.210213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.210250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.210364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.210400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 
00:37:36.611 [2024-11-19 08:01:28.210503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.210538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.210641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.210678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.210828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.210863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.210980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.211016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.211146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.211182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 
00:37:36.611 [2024-11-19 08:01:28.211319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.211355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.211498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.211534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.211665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.211726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.211888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.211938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.212082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.212120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 
00:37:36.611 [2024-11-19 08:01:28.212263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.212300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.212429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.212464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.212573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.212609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.212729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.212767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.212926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.212976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 
00:37:36.611 [2024-11-19 08:01:28.213130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.213168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.213277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.213318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.213484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.213520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.213627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.213662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.213811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.213849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 
00:37:36.611 [2024-11-19 08:01:28.213990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.214026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.214164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.214199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.611 [2024-11-19 08:01:28.214303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.611 [2024-11-19 08:01:28.214339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.611 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.214449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.214485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.214636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.214687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 
00:37:36.612 [2024-11-19 08:01:28.214843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.214880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.215000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.215037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.215177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.215213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.215344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.215380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.215491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.215526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 
00:37:36.612 [2024-11-19 08:01:28.215636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.215672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.215788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.215824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.215951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.215987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.216124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.216160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.216267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.216303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 
00:37:36.612 [2024-11-19 08:01:28.216422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.216458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.216564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.216601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.216739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.216776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.216882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.216918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.217085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.217121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 
00:37:36.612 [2024-11-19 08:01:28.217228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.217264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.217372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.217409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.217551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.217587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.217721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.217772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.217963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.218000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 
00:37:36.612 [2024-11-19 08:01:28.218156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.218206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.218331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.218379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.218546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.218583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.218724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.218772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.218885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.218920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 
00:37:36.612 [2024-11-19 08:01:28.219028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.219064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.219166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.219201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.219341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.219376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.219528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.219578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.219728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.219777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 
00:37:36.612 [2024-11-19 08:01:28.219932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.219981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.220093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.220136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.220237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.220273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.220414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.220449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 00:37:36.612 [2024-11-19 08:01:28.220556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.612 [2024-11-19 08:01:28.220592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.612 qpair failed and we were unable to recover it. 
00:37:36.612 [2024-11-19 08:01:28.220753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.220803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.220949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.220986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.221091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.221127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.221265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.221300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.221433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.221469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 
00:37:36.613 [2024-11-19 08:01:28.221578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.221613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.221754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.221791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.221916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.221957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.222094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.222129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.222298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.222334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 
00:37:36.613 [2024-11-19 08:01:28.222450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.222486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.222617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.222666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.222786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.222822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.222963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.223000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.223111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.223147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 
00:37:36.613 [2024-11-19 08:01:28.223307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.223343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.223482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.223518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.223627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.223664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.223800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.223849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 00:37:36.613 [2024-11-19 08:01:28.223971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.613 [2024-11-19 08:01:28.224006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.613 qpair failed and we were unable to recover it. 
00:37:36.613 [2024-11-19 08:01:28.224115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.224151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.224255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.224290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.224431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.224466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.224615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.224651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.224776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.224812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.224963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.225003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.225113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.225150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.225298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.225333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.225444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.225479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.225610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.225646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.225783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.225819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.225922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.225957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.226093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.226130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.226234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.226269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.226379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.226415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.226549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.226584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.226701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.226743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.613 [2024-11-19 08:01:28.226851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.613 [2024-11-19 08:01:28.226887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.613 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.227029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.227066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.227230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.227266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.227374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.227409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.227551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.227587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.227704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.227740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.227877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.227914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.228045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.228080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.228188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.228224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.228330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.228366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.228474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.228511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.228614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.228650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.228771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.228809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.228926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.228962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.229092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.229127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.229292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.229327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.229441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.229477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.229585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.229620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.229755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.229792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.229914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.229950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.230080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.230116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.230215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.230250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.230368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.230405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.230535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.230584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.230771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.230817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.230958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.230993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.231134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.231169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.231280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.231316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.231464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.231501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.231629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.231678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.231834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.231873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.232004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.232040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.232177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.232214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.232367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.232417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.232568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.232605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.232714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.232751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.232870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.232908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.614 [2024-11-19 08:01:28.233022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.614 [2024-11-19 08:01:28.233055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.614 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.233165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.233208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.233322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.233362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.233509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.233549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.233671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.233722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.233848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.233907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.234088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.234124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.234242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.234277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.234384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.234418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.234528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.234563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.234664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.234706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.234815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.234859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.234996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.235032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.235135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.235170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.235310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.235345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.235485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.235525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.235681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.235748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.235920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.235969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.236155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.236191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.236300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.236336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.236473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.236508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.236615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.236650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.236803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.236842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.236951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.236987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.237127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.237163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.237300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.237336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.237439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.237474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.237617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.237656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.237769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.237806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.237940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.237991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.238109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.238148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.238293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.238330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.238478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.238514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.238653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.238696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.238817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.238856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.238996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.239032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.239178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.239213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.239322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.239357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.615 qpair failed and we were unable to recover it.
00:37:36.615 [2024-11-19 08:01:28.239467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.615 [2024-11-19 08:01:28.239502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.239655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.239711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.239833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.239871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.240010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.240046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.240157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.240200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.240315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.240350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.240462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.240500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.240625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.240674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.240811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.240849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.240962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.240997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.241129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.241164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.241267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.241302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.241443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.241480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.241597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.241634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.241756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.241793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.241928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.241964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.242135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.242171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.242345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.242394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.242514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.242551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.242659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.242699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.242812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.242847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.242985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.243020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.243129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.243163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.243294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.243330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.243447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.243487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.243644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.243706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.243816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.243852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.243989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.244024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.244156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.244191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.244290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.244325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.244477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.244514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.244665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.244715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.244839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.244891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.245042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.245079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.616 [2024-11-19 08:01:28.245191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.616 [2024-11-19 08:01:28.245227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.616 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.245337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.245372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.245505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.245540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.245653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.245696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.245809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.245846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.245962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.246002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.246117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.246154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.246268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.246304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.246458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.246494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.246662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.246710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.246851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.246901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.247016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.247053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.247168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.247203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.247339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.247374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.247510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.247544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.247658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.247699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.247883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.247933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.248082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.248120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.248234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.248270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.248402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.248437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.248555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.248605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.248754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.248793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.248903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.248939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.249050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.249086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.249299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.249336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.249448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.249484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.249621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.249658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.249828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.249878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.250025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.250061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.250201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.250237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.250349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.250385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.250495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.250530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.250658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.250717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.250875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.250912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.251050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.251086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.251193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.251229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.251367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.251403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.251532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.251572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.617 qpair failed and we were unable to recover it.
00:37:36.617 [2024-11-19 08:01:28.251682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.617 [2024-11-19 08:01:28.251734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.251850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.251889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.252023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.252059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.252174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.252209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.252326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.252361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.252477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.252514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.252635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.252686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.252823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.252873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.253016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.253052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.253195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.253230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.253343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.253378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.253540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.253574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.253683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.253730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.253884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.253925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.254040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.254078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.254187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.254223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.254390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.254425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.254545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.254581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.254700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.254737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.254843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.254879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.255025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.255065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.255185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.255222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.255335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.255371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.255482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.255518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.255661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.255708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.255850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.618 [2024-11-19 08:01:28.255893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.618 qpair failed and we were unable to recover it.
00:37:36.618 [2024-11-19 08:01:28.256010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.256046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.256157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.256193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.256326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.256361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.256504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.256540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.256681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.256731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 
00:37:36.618 [2024-11-19 08:01:28.256911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.256961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.257082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.257119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.257264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.257300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.257402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.257439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.257549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.257586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 
00:37:36.618 [2024-11-19 08:01:28.257747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.257797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.257930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.618 [2024-11-19 08:01:28.257980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.618 qpair failed and we were unable to recover it. 00:37:36.618 [2024-11-19 08:01:28.258126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.258165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.258337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.258374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.258484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.258520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 
00:37:36.619 [2024-11-19 08:01:28.258620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.258656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.258773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.258810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.258934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.258984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.259127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.259166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.259285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.259321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 
00:37:36.619 [2024-11-19 08:01:28.259434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.259471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.259612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.259649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.259780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.259830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.259946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.259984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.260139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.260179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 
00:37:36.619 [2024-11-19 08:01:28.260294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.260331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.260475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.260518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.260665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.260711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.260856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.260892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.260999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.261034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 
00:37:36.619 [2024-11-19 08:01:28.261161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.261195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.261304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.261342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.261468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.261505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.261611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.261646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.261767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.261803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 
00:37:36.619 [2024-11-19 08:01:28.261940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.261975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.262109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.262145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.262249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.262285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.262389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.262425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.262554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.262606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 
00:37:36.619 [2024-11-19 08:01:28.262761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.262798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.262906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.262942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.263097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.263133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.263243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.263291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.263435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.263471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 
00:37:36.619 [2024-11-19 08:01:28.263588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.263623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.263735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.263770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.263884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.263922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.264035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.264071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 00:37:36.619 [2024-11-19 08:01:28.264216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.619 [2024-11-19 08:01:28.264251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.619 qpair failed and we were unable to recover it. 
00:37:36.619 [2024-11-19 08:01:28.264363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.264398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.264553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.264603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.264722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.264761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.264868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.264904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.265015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.265052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 
00:37:36.620 [2024-11-19 08:01:28.265165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.265200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.265339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.265375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.265520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.265555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.265669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.265710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.265831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.265869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 
00:37:36.620 [2024-11-19 08:01:28.266016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.266052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.266178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.266228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.266349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.266388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.266512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.266549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.266651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.266705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 
00:37:36.620 [2024-11-19 08:01:28.266817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.266854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.266971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.267013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.267122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.267158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.267264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.267301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.267417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.267456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 
00:37:36.620 [2024-11-19 08:01:28.267571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.267609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.267778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.267828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.267952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.267990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.268159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.268195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.268307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.268343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 
00:37:36.620 [2024-11-19 08:01:28.268477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.268513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.268642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.268702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.268822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.268857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.268993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.269028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.269137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.269172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 
00:37:36.620 [2024-11-19 08:01:28.269320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.269355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.269467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.269504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.620 [2024-11-19 08:01:28.269647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.620 [2024-11-19 08:01:28.269683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.620 qpair failed and we were unable to recover it. 00:37:36.621 [2024-11-19 08:01:28.269827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.621 [2024-11-19 08:01:28.269877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.621 qpair failed and we were unable to recover it. 00:37:36.621 [2024-11-19 08:01:28.270027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.621 [2024-11-19 08:01:28.270064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.621 qpair failed and we were unable to recover it. 
00:37:36.621 [2024-11-19 08:01:28.270202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.621 [2024-11-19 08:01:28.270237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.621 qpair failed and we were unable to recover it. 00:37:36.621 [2024-11-19 08:01:28.270362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.621 [2024-11-19 08:01:28.270397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.621 qpair failed and we were unable to recover it. 00:37:36.621 [2024-11-19 08:01:28.270510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.621 [2024-11-19 08:01:28.270545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.621 qpair failed and we were unable to recover it. 00:37:36.621 [2024-11-19 08:01:28.270693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.621 [2024-11-19 08:01:28.270731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.621 qpair failed and we were unable to recover it. 00:37:36.621 [2024-11-19 08:01:28.270857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.621 [2024-11-19 08:01:28.270909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.621 qpair failed and we were unable to recover it. 
[00:37:36.621-00:37:36.624] The same three-line failure sequence (connect() failed, errno = 111 → sock connection error → "qpair failed and we were unable to recover it.") repeats continuously through [2024-11-19 08:01:28.290531] for tqpairs 0x6150001ffe80, 0x6150001f2f00, 0x61500021ff00, and 0x615000210000, all targeting addr=10.0.0.2, port=4420.
00:37:36.624 [2024-11-19 08:01:28.290674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.290716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.290821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.290855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.290967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.291002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.291142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.291176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.291312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.291347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 
00:37:36.624 [2024-11-19 08:01:28.291494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.291534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.291668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.291727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.291869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.291920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.292090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.292126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.292263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.292297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 
00:37:36.624 [2024-11-19 08:01:28.292433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.292468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.292566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.292601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.292734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.292785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.292948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.292998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.293141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.293179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 
00:37:36.624 [2024-11-19 08:01:28.293291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.293326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.293440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.293475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.293614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.293649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.293790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.293826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.293960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.293995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 
00:37:36.624 [2024-11-19 08:01:28.294129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.294164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.294268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.294302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.294434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.294470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.294610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.294645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.294808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.294859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 
00:37:36.624 [2024-11-19 08:01:28.295021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.295070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.295187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.295225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.295328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.295364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.295495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.295531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.295675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.295721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 
00:37:36.624 [2024-11-19 08:01:28.295861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.295897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.624 qpair failed and we were unable to recover it. 00:37:36.624 [2024-11-19 08:01:28.296039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.624 [2024-11-19 08:01:28.296074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.296186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.296221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.296361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.296396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.296512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.296547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 
00:37:36.625 [2024-11-19 08:01:28.296681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.296724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.296861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.296897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.297035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.297070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.297241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.297277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.297441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.297476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 
00:37:36.625 [2024-11-19 08:01:28.297633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.297683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.297816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.297854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.297991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.298028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.298165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.298201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.298314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.298349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 
00:37:36.625 [2024-11-19 08:01:28.298476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.298511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.298616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.298651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.298774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.298810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.298922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.298961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.299103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.299140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 
00:37:36.625 [2024-11-19 08:01:28.299274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.299310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.299466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.299501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.299633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.299668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.299829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.299885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.300057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.300093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 
00:37:36.625 [2024-11-19 08:01:28.300235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.300270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.300402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.300437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.300569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.300610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.300741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.300777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.300886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.300922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 
00:37:36.625 [2024-11-19 08:01:28.301040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.301076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.301206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.301241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.301394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.301430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.301601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.301642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 00:37:36.625 [2024-11-19 08:01:28.301778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.625 [2024-11-19 08:01:28.301828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.625 qpair failed and we were unable to recover it. 
00:37:36.625 [2024-11-19 08:01:28.301976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.302014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.302167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.302203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.302369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.302405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.302513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.302549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.302715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.302751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 
00:37:36.626 [2024-11-19 08:01:28.302878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.302929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.303074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.303113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.303220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.303257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.303394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.303430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.303569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.303604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 
00:37:36.626 [2024-11-19 08:01:28.303726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.303765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.303877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.303913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.304017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.304052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.304158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.304193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.304304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.304339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 
00:37:36.626 [2024-11-19 08:01:28.304485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.304523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.304657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.304701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.304842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.304879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.304994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.305030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 00:37:36.626 [2024-11-19 08:01:28.305137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.626 [2024-11-19 08:01:28.305172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.626 qpair failed and we were unable to recover it. 
00:37:36.626 [2024-11-19 08:01:28.305332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.305367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.305481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.305517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.305623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.305659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.305805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.305842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.305953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.305989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.306170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.306232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.306356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.306394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.306536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.306573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.306740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.306783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.306950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.306986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.307098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.307133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.307299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.307334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.307439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.307474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.307609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.307659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.307794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.307832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.307954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.307991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.308108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.308144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.308295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.626 [2024-11-19 08:01:28.308345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.626 qpair failed and we were unable to recover it.
00:37:36.626 [2024-11-19 08:01:28.308481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.308519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.308651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.308698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.308839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.308875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.309036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.309071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.309219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.309255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.309421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.309457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.309600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.309640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.309833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.309882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.310013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.310052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.310178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.310214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.310362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.310398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.310536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.310572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.310733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.310768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.310902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.310952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.311074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.311113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.311278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.311314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.311449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.311485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.311683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.311743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.311885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.311936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.312081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.312117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.312225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.312260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.312377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.312411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.312551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.312587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.312707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.312745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.312900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.312951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.313127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.313166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.313309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.313345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.313484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.313519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.313655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.313700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.313843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.313879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.314053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.314094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.314231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.314265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.314401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.314437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.314577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.314612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.314794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.314846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.314961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.315000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.315140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.315176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.627 qpair failed and we were unable to recover it.
00:37:36.627 [2024-11-19 08:01:28.315300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.627 [2024-11-19 08:01:28.315335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.315492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.315542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.315700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.315738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.315844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.315880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.316018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.316054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.316179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.316214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.316344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.316380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.316531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.316568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.316715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.316751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.316867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.316902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.317004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.317038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.317181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.317220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.317365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.317402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.317510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.317546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.317685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.317729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.317885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.317936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.318090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.318128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.318280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.318316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.318487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.318523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.318671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.318725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.318866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.318916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.319090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.319128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.319258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.319293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.319432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.319468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.319580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.319615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.319720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.319756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.319880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.319917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.320030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.320066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.320230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.320267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.320384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.320419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.320533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.320569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.320720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.320757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.320869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.320906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.321100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.321142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.321246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.321293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.321460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.321496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.321604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.321640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.321791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.628 [2024-11-19 08:01:28.321828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.628 qpair failed and we were unable to recover it.
00:37:36.628 [2024-11-19 08:01:28.321962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.629 [2024-11-19 08:01:28.321998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.629 qpair failed and we were unable to recover it.
00:37:36.629 [2024-11-19 08:01:28.322147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.629 [2024-11-19 08:01:28.322198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.629 qpair failed and we were unable to recover it.
00:37:36.629 [2024-11-19 08:01:28.322325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.322378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.322540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.322577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.322703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.322740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.322855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.322890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.323056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.323093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 
00:37:36.629 [2024-11-19 08:01:28.323241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.323278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.323429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.323479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.323615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.323665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.323829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.323866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.324009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.324045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 
00:37:36.629 [2024-11-19 08:01:28.324186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.324222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.324336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.324373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.324517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.324554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.324684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.324750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.324937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.324988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 
00:37:36.629 [2024-11-19 08:01:28.325107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.325145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.325287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.325323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.325441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.325477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.325587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.325623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.325811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.325862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 
00:37:36.629 [2024-11-19 08:01:28.326037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.326074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.326220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.326257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.326420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.326456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.326592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.326629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.326769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.326806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 
00:37:36.629 [2024-11-19 08:01:28.326923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.326959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.327093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.327129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.327241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.327277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.327419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.327454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.327582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.327632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 
00:37:36.629 [2024-11-19 08:01:28.327809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.327848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.328002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.328039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.328176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.328212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.328343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.328384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 00:37:36.629 [2024-11-19 08:01:28.328523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.629 [2024-11-19 08:01:28.328558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.629 qpair failed and we were unable to recover it. 
00:37:36.629 [2024-11-19 08:01:28.328701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.328752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.328890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.328941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.329066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.329104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.329242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.329279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.329394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.329437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 
00:37:36.630 [2024-11-19 08:01:28.329580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.329627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.329776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.329814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.329937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.329974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.330109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.330146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.330288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.330325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 
00:37:36.630 [2024-11-19 08:01:28.330489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.330526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.330667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.330710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.330838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.330875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.331053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.331104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.331218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.331255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 
00:37:36.630 [2024-11-19 08:01:28.331402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.331438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.331578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.331616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.331772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.331808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.331943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.331978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.332114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.332150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 
00:37:36.630 [2024-11-19 08:01:28.332266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.332302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.332456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.332495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.332638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.332674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.332835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.332871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.332980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.333016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 
00:37:36.630 [2024-11-19 08:01:28.333156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.333191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.333353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.333389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.333492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.333528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.333702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.333740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.333921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.333971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 
00:37:36.630 [2024-11-19 08:01:28.334130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.334168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.334330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.334365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.334503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.334538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.334650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.630 [2024-11-19 08:01:28.334686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.630 qpair failed and we were unable to recover it. 00:37:36.630 [2024-11-19 08:01:28.334814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.334850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 
00:37:36.631 [2024-11-19 08:01:28.335003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.335053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.335203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.335241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.335382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.335419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.335594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.335634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.335797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.335834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 
00:37:36.631 [2024-11-19 08:01:28.336002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.336040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.336153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.336192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.336295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.336330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.336471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.336507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.336614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.336650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 
00:37:36.631 [2024-11-19 08:01:28.336802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.336838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.336978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.337020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.337128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.337165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.337304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.337339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 00:37:36.631 [2024-11-19 08:01:28.337455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.631 [2024-11-19 08:01:28.337492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.631 qpair failed and we were unable to recover it. 
00:37:36.631 [2024-11-19 08:01:28.337636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.631 [2024-11-19 08:01:28.337671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.631 qpair failed and we were unable to recover it.
[... the same three-line pattern repeats for roughly 110 further connect attempts between 08:01:28.337806 and 08:01:28.358447, cycling through tqpair handles 0x61500021ff00, 0x615000210000, 0x6150001ffe80, and 0x6150001f2f00; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:37:36.634 [2024-11-19 08:01:28.358596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.358635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.358806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.358856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.359010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.359046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.359160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.359195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.359328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.359363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 
00:37:36.634 [2024-11-19 08:01:28.359524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.359558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.359669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.359722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.359869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.359905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.360057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.360107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.360253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.360291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 
00:37:36.634 [2024-11-19 08:01:28.360479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.360529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.360675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.360732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.360880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.360916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.361054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.361089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.634 [2024-11-19 08:01:28.361199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.361234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 
00:37:36.634 [2024-11-19 08:01:28.361400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.634 [2024-11-19 08:01:28.361435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.634 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.361578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.361614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.361736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.361771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.361877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.361911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.362039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.362073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 
00:37:36.635 [2024-11-19 08:01:28.362192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.362232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.362373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.362409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.362539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.362589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.362715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.362751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.362894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.362929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 
00:37:36.635 [2024-11-19 08:01:28.363059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.363095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.363231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.363265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.363407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.363442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.363588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.363624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.363778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.363815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 
00:37:36.635 [2024-11-19 08:01:28.363978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.364012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.364178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.364213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.364354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.364389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.364501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.364541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.364682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.364735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 
00:37:36.635 [2024-11-19 08:01:28.364843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.364878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.365017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.365052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.365161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.365196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.365353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.365388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.365524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.365558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 
00:37:36.635 [2024-11-19 08:01:28.365673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.365714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.365876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.365926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.366074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.366113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.366229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.366265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.366400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.366436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 
00:37:36.635 [2024-11-19 08:01:28.366551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.366587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.366696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.366733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.366848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.366889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.367053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.367103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.367256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.367306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 
00:37:36.635 [2024-11-19 08:01:28.367425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.367462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.367607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.367642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.367775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.635 [2024-11-19 08:01:28.367810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.635 qpair failed and we were unable to recover it. 00:37:36.635 [2024-11-19 08:01:28.367924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.367961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.368131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.368167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 
00:37:36.636 [2024-11-19 08:01:28.368313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.368353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.368499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.368535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.368658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.368715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.368857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.368893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.369032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.369068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 
00:37:36.636 [2024-11-19 08:01:28.369185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.369224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.369342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.369378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.369522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.369559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.369673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.369717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.369854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.369890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 
00:37:36.636 [2024-11-19 08:01:28.370024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.370058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.370221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.370255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.370387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.370436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.370551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.370587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.370731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.370768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 
00:37:36.636 [2024-11-19 08:01:28.370883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.370919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.371101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.371137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.371285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.371321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.371455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.371506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.371646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.371681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 
00:37:36.636 [2024-11-19 08:01:28.371866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.371902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.372062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.372113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.372255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.372293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.372434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.372471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.372587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.372623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 
00:37:36.636 [2024-11-19 08:01:28.372732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.372769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.372887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.372923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.373030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.373065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.373196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.373231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.373410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.373460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 
00:37:36.636 [2024-11-19 08:01:28.373586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.373623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.373750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.373799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.373951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.373989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.374096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.374132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.636 [2024-11-19 08:01:28.374239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.374274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 
00:37:36.636 [2024-11-19 08:01:28.374444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.636 [2024-11-19 08:01:28.374481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.636 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.374624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.374659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.374817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.374865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.375016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.375053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.375170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.375207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 
00:37:36.637 [2024-11-19 08:01:28.375329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.375364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.375481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.375517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.375620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.375655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.375802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.375838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.375949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.375985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 
00:37:36.637 [2024-11-19 08:01:28.376111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.376161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.376302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.376340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.376486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.376523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.376643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.376678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.376823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.376858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 
00:37:36.637 [2024-11-19 08:01:28.376968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.377002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.377120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.377155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.377262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.377297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.377441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.377478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.377661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.377720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 
00:37:36.637 [2024-11-19 08:01:28.377856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.377905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.378048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.378084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.378247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.378282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.378390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.378432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.378596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.378631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 
00:37:36.637 [2024-11-19 08:01:28.378790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.378841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.379018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.379055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.379165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.379200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.379339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.379375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.379517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.379552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 
00:37:36.637 [2024-11-19 08:01:28.379704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.379753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.379865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.379900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.380041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.380076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.380218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.380254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.380366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.380401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 
00:37:36.637 [2024-11-19 08:01:28.380506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.380541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.380648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.380685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.380814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.637 [2024-11-19 08:01:28.380850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.637 qpair failed and we were unable to recover it. 00:37:36.637 [2024-11-19 08:01:28.380979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.381014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.381116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.381152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 
00:37:36.638 [2024-11-19 08:01:28.381299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.381338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.381528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.381589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.381748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.381798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.381957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.381993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.382155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.382190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 
00:37:36.638 [2024-11-19 08:01:28.382328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.382362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.382468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.382503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.382641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.382678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.382833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.382869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.383004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.383055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 
00:37:36.638 [2024-11-19 08:01:28.383169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.383206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.383375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.383411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.383512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.383548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.383653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.383695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.383831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.383881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 
00:37:36.638 [2024-11-19 08:01:28.384005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.384042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.384186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.384222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.384329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.384366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.384467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.384503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.384632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.384668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 
00:37:36.638 [2024-11-19 08:01:28.384829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.384866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.385013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.385053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.385171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.385210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.385356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.385399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.385512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.385548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 
00:37:36.638 [2024-11-19 08:01:28.385708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.385762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.385875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.385912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.386030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.386066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.386171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.386207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.386319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.386366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 
00:37:36.638 [2024-11-19 08:01:28.386514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.386549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.386702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.386746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.386858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.386893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.387035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.387072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.387179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.387227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 
00:37:36.638 [2024-11-19 08:01:28.387326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.638 [2024-11-19 08:01:28.387363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.638 qpair failed and we were unable to recover it. 00:37:36.638 [2024-11-19 08:01:28.387500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.387537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.387685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.387738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.387872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.387921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.388080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.388118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 
00:37:36.639 [2024-11-19 08:01:28.388228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.388264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.388410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.388446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.388597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.388648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.388791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.388841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.388997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.389036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 
00:37:36.639 [2024-11-19 08:01:28.389154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.389192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.389334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.389370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.389501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.389537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.389655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.389699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.389845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.389881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 
00:37:36.639 [2024-11-19 08:01:28.390003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.390045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.390156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.390192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.390360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.390419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.390540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.390579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.390743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.390794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 
00:37:36.639 [2024-11-19 08:01:28.390941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.390978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.391093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.391129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.391269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.391305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.391415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.391452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.391595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.391631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 
00:37:36.639 [2024-11-19 08:01:28.391748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.391786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.391902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.391937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.392076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.392113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.392210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.392247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.392363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.392400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 
00:37:36.639 [2024-11-19 08:01:28.392578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.392628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.392794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.392832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.392981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.393018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.393155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.393191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 00:37:36.639 [2024-11-19 08:01:28.393326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.639 [2024-11-19 08:01:28.393362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.639 qpair failed and we were unable to recover it. 
00:37:36.640 [2024-11-19 08:01:28.393504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.393541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.393646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.393683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.393811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.393846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.393984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.394019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.394148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.394185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 
00:37:36.640 [2024-11-19 08:01:28.394305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.394341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.394478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.394516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.394638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.394674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.394839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.394889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.395004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.395042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 
00:37:36.640 [2024-11-19 08:01:28.395181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.395217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.395383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.395418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.395564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.395600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.395740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.395777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.395883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.395918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 
00:37:36.640 [2024-11-19 08:01:28.396062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.396098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.396232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.396267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.396405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.396441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.396547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.396585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.396738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.396774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 
00:37:36.640 [2024-11-19 08:01:28.396913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.396967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.397114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.397150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.397292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.397329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.397497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.397532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.397648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.397684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 
00:37:36.640 [2024-11-19 08:01:28.397808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.397854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.398017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.398067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.398186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.398224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.398323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.398359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.398470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.398506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 
00:37:36.640 [2024-11-19 08:01:28.398669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.398715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.398834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.398869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.398981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.399016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.399150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.399186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.399304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.399339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 
00:37:36.640 [2024-11-19 08:01:28.399454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.399492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.640 [2024-11-19 08:01:28.399661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.640 [2024-11-19 08:01:28.399722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.640 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.399881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.399919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.400070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.400106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.400245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.400281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 08:01:28.400454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.400491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.400632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.400670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.400831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.400868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.400995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.401045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.401192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.401231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 08:01:28.401397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.401433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.401571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.401609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.401777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.401815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.401961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.401998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.402111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.402147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 08:01:28.402321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.402357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.402493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.402529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.402705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.402756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.402908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.402945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.403059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.403096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 08:01:28.403206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.403242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.403346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.403383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.403525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.403560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.403736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.403772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.403900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.403936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 08:01:28.404046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.404099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.404243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.404279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.404434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.404485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.404696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.404746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.404874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.404911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.641 [2024-11-19 08:01:28.405084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.405121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.405228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.405265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.405399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.405435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.405614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.405650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 00:37:36.641 [2024-11-19 08:01:28.405811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.641 [2024-11-19 08:01:28.405862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.641 qpair failed and we were unable to recover it. 
00:37:36.644 [2024-11-19 08:01:28.426225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 08:01:28.426270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 08:01:28.426416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 08:01:28.426451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 08:01:28.426588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 08:01:28.426623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 08:01:28.426746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 08:01:28.426782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 08:01:28.426949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 08:01:28.426984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 
00:37:36.644 [2024-11-19 08:01:28.427119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 08:01:28.427154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 08:01:28.427289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 08:01:28.427323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 08:01:28.427470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.644 [2024-11-19 08:01:28.427508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.644 qpair failed and we were unable to recover it. 00:37:36.644 [2024-11-19 08:01:28.427643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.427701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.427850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.427900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 08:01:28.428024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.428059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.428175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.428210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.428381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.428417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.428563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.428597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.428706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.428748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 08:01:28.428917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.428956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.429102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.429140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.429305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.429341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.429478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.429513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.429648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.429682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 08:01:28.429830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.429865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.429972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.430007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.430144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.430179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.430307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.430342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.430480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.430517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 08:01:28.430656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.430700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.430819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.430854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.431001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.431038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.431176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.431211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.431339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.431374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 08:01:28.431491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.431526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.431661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.431703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.431833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.431883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.432010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.432046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.432190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.432226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 08:01:28.432360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.432396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.432511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.432547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.432656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.432697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.432881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.432916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.433031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.433066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 08:01:28.433198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.433237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.433345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.433379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.433492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.433527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.433687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.433748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.433864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.433899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 08:01:28.434025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.434076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.434190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.434228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.434365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.434401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.434565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.434601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.434740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.434789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 
00:37:36.645 [2024-11-19 08:01:28.434932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.434968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.645 [2024-11-19 08:01:28.435087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.645 [2024-11-19 08:01:28.435122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.645 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.435280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.435315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.435476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.435510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.435627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.435662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 08:01:28.435788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.435825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.435995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.436031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.436169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.436205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.436344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.436379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.436540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.436590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 08:01:28.436727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.436777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.436923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.436959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.437128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.437163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.437271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.437305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.437481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.437517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 08:01:28.437686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.437731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.437866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.437901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.438026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.438066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.438205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.438240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.438378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.438413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 08:01:28.438579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.438614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.438769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.438819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.438967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.439017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.439162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.439200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.439340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.439376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 08:01:28.439512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.439547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.439706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.439758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.439933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.439969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.440118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.440167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.440309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.440346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 08:01:28.440447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.440488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.440603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.440638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.440772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.440822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.440960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.441011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.441140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.441177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 08:01:28.441315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.441350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.441453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.441488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.441600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.441636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.441761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.441811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.441939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.441978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 08:01:28.442122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.442158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.442273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.442308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.442451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.442487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.442595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.442630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.646 [2024-11-19 08:01:28.442784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.442822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 
00:37:36.646 [2024-11-19 08:01:28.442968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.646 [2024-11-19 08:01:28.443005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.646 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.443112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.443147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.443283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.443317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.443490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.443526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.443634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.443669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 08:01:28.443844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.443880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.444048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.444084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.444236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.444285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.444423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.444461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.444603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.444639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 08:01:28.444786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.444822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.444959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.444993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.445095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.445130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.445274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.445308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.445474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.445523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 08:01:28.445645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.445683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.445804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.445848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.446019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.446056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.446219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.446255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.446453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.446503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 08:01:28.446648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.446683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.446808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.446844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.446956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.446990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.447126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.447161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.447272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.447306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 08:01:28.447469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.447509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.447652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.447686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.447824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.447874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.448027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.448063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.448197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.448233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 08:01:28.448380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.448415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.448547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.448583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.448721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.448758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.448862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.448898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.449011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.449049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 08:01:28.449188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.449224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.449390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.449425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.449554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.449589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.449743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.449793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 00:37:36.647 [2024-11-19 08:01:28.449970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.647 [2024-11-19 08:01:28.450006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.647 qpair failed and we were unable to recover it. 
00:37:36.647 [2024-11-19 08:01:28.450145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.450181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.450290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.450327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.450482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.450522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.450685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.450742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.450894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.450941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 08:01:28.451105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.451140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.451282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.451318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.451425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.451460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.451622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.451657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.451843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.451894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 08:01:28.452045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.452083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.452214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.452250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.452367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.452403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.452506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.452540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.452676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.452720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 08:01:28.452885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.452920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.453061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.453095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.453200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.453235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.453350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.453386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.453568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.453618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 08:01:28.453829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.453880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.454032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.454071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.454211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.454247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.454415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.454451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.454595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.454631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 08:01:28.454755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.454799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.454950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.454990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.455106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.455143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.455284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.455320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.455458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.455493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 08:01:28.455636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.455673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.455832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.455881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.455995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.456031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.456203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.456238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.456375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.456410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 
00:37:36.648 [2024-11-19 08:01:28.456545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.456580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.456705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.648 [2024-11-19 08:01:28.456742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.648 qpair failed and we were unable to recover it. 00:37:36.648 [2024-11-19 08:01:28.456855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 08:01:28.456890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 08:01:28.457029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 08:01:28.457064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 08:01:28.457234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 08:01:28.457270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 
00:37:36.649 [2024-11-19 08:01:28.457380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 08:01:28.457415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 08:01:28.457522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 08:01:28.457557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 08:01:28.457656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 08:01:28.457703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 08:01:28.457868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 08:01:28.457918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 00:37:36.649 [2024-11-19 08:01:28.458083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.649 [2024-11-19 08:01:28.458122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.649 qpair failed and we were unable to recover it. 
00:37:36.649 [2024-11-19 08:01:28.458287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.458335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.458504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.458540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.458645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.458680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.458809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.458844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.459014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.459051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.459191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.459226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.459341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.459377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.459520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.459557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.459695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.459745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.459861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.459898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.460035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.460071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.460204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.460239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.460349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.460385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.460506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.460542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.460680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.460735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.460853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.460888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.461033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.461070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.461210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.461246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.461354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.461392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.461534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.461571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.461679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.461743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.461879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.461915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.462055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.462091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.462202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.462238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.462349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.462385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.462506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.462543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.462684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.462740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.462902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.462948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.463101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.463137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.463289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.463339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.463522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.463560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.463700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.463748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.463877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.463912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.464051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.464086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.464232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.464268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.464403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.464439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.464591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.464642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.464787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.464837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.464956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.465011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.465148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.465184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.465325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.465361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.465525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.649 [2024-11-19 08:01:28.465561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.649 qpair failed and we were unable to recover it.
00:37:36.649 [2024-11-19 08:01:28.465672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.465728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.465888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.465938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.466085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.466123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.466263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.466299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.466408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.466444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.466606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.466658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.466800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.466836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.466950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.466986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.467145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.467182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.467318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.467354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.467484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.467520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.467648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.467706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.467869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.467907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.468012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.468047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.468185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.468232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.468365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.468401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.468522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.468572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.468712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.468749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.468889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.468929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.469080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.469116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.469245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.469281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.469419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.469454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.469596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.469632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.469785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.469825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.469947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.469984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.470136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.470186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.470300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.470338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.470499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.470550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.470733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.470769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.470905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.470940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.471085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.471121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.471286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.471322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.471437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.471473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.471573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.471607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.471740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.471781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.471949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.471987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.472111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.472160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.472318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.472356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.472480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.472519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.472694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.472732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.472863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.472900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.473043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.473079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.473189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.473226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.473388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.473423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.473538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.473575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.473747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.473785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.473929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.473969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.474108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.474145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.474314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.650 [2024-11-19 08:01:28.474350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.650 qpair failed and we were unable to recover it.
00:37:36.650 [2024-11-19 08:01:28.474481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.474517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.474646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.474682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.474806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.474843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.474944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.474980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.475082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.475118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.475231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.475267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.475420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.475457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.475565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.475601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.475745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.475782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.475895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.475936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.476101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.476136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.476282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.476316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.476453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.476488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.476620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.476655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.476787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.476825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.476941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.476977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.477081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.477117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.477281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.477317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.477424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.477460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.477602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.477651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.477774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.477812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.477926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.477962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.478071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.478107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.478246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.478281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.478425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.478462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.478568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.478604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.478727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.478778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.478953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.478991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.479097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.479133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.479253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.479289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.479428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.479465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.479649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.479709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.479833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.479871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.479982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.480017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.480153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.480188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.480334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.480371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.480538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.480581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.480702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.480750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.480881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.480931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.481121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.481160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.481295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.481332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.481475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.481512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.481624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.481660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.481803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.651 [2024-11-19 08:01:28.481852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.651 qpair failed and we were unable to recover it.
00:37:36.651 [2024-11-19 08:01:28.481983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.482020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.482166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.482202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.482342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.482377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.482494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.482531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.482666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.482726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.482841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.482878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.483031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.483067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.483177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.483214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.483353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.483390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.483529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.483565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.483685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.483743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.483895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.483930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.484088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.484124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.484234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.484271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.484375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.484410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.484574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.484610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.484770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.484821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.484952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.485003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.485127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.485165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.485377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.485413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.485540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.485576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.485682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.485733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.485875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.485911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.486121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.486158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.486301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.486338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.486472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.486509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.486646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.486682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.486854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.486890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.487057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.487092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.487253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.487304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.487454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.487490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.487598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.487636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.487795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.487850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.487986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.488037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.488227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.488266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.488409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.488448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.488595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.488632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.488794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.488832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.488954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.489004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.489172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.489210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.489328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.489365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.489497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.489533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.489714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.489768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.489913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.489961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.490070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.490108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.490254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.490290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.490426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.490461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.490638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.490675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.490810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.490845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.490955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.652 [2024-11-19 08:01:28.490991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.652 qpair failed and we were unable to recover it.
00:37:36.652 [2024-11-19 08:01:28.491123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.491158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.491298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.491333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.491457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.491508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.491656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.491702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.491827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.491864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.492001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.492037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.492171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.492207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.492355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.492406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.492554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.492590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.492756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.492806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.492954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.492992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.493157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.493194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.493333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.493369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.493510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.493546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.493713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.493753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.493885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.493934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.494118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.494156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.494303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.494339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.494447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.494483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.494593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.494630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.494802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.494837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.494950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.494985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.495122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.495163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.495303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.495338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.495481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.495530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.495663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.495706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.495821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.495857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.495978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.496014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.496178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.496215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.496371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.496422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.496577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.496614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.496730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.496773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.496912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.496947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.497112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.497147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.497281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.497316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.497454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.497489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.497676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.497743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.497889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.497926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.498064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.498100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.498263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.498299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.498423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.498458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.498620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.498656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.498807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.498845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.498975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.499011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.499127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.499162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.499274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.499309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.499447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.499482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.499634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.499684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.499876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.499927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.500054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.653 [2024-11-19 08:01:28.500104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.653 qpair failed and we were unable to recover it.
00:37:36.653 [2024-11-19 08:01:28.500220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.500256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.500398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.500434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.500564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.500600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.500742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.500778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.500926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.500981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.501138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.501176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.501286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.501323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.501462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.501499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.501608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.501641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.501775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.501810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.501956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.501991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.502124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.502160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.502297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.502338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.502448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.502485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.502620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.502671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.502842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.502880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.503026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.503062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.503193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.503230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.503370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.503407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.503547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.503584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.503735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.503770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.503927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.503977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.504168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.504207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.504374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.504411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.504544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.504581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.504720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.504757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.504920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.504979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.505152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.505188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.505324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.505360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.505494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.505529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.505632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.505667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.505820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.505854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.505976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.654 [2024-11-19 08:01:28.506013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.654 qpair failed and we were unable to recover it.
00:37:36.654 [2024-11-19 08:01:28.506130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.506171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.506292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.506329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.506467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.506505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.506646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.506683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.506843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.506879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.507046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.507081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.507267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.507304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.507446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.507482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.507589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.507625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.507782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.507819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.507952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.507996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.508126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.508174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.508338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.508375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.508487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.941 [2024-11-19 08:01:28.508523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.941 qpair failed and we were unable to recover it.
00:37:36.941 [2024-11-19 08:01:28.508684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.508738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.508894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.508944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.509071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.509108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.509250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.509286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.509422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.509458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.509556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.509597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.509740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.509775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.509892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.509950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.510057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.510093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.510233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.510271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.510388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.510425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.510577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.510626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.510803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.510853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.510982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.511020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.511160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.511196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.511308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.511343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.511478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.511514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.511687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.511745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.511856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.511890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.512005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.512040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.512178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.512213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.512315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.512355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.512499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.512545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.512686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.512734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.512874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.512909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.513062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.513097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.513235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.513271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.513416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.513456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.513575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.513615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.513764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.513814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.513959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.513994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.514155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.514191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.514363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.514403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.514540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.514579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.514734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.514771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.514905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.514943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.515094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.515134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.942 qpair failed and we were unable to recover it.
00:37:36.942 [2024-11-19 08:01:28.515245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.942 [2024-11-19 08:01:28.515281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.515451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.515488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.515627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.515663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.515785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.515822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.515970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.516006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.516145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.516185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.516319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.516359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.516506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.516556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.516715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.516770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.516913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.516950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.517087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.517123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.517224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.517260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.517424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.517460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.517601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.517638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.517803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.517853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.517973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.518010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.518154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.518190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.518326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.518361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.518512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.518552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.518704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.518741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.518880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.518916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.519030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.519066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.519215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.519252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.519363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.519398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.519543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.519578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.519725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.519763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.519883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.519920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.520057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.520093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.520231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.520266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.520414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.520450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.520587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.520623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.520762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.520812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.520963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.521000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.521146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.521188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.521356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.521391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.521513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.521549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.521724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.521763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.943 [2024-11-19 08:01:28.521882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.943 [2024-11-19 08:01:28.521921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.943 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.522040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.522078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.522191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.522226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.522371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.522408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.522519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.522554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.522718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.522755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.522892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.522928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.523108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.523143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.523252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.523286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.523398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.523433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.523582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.523617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.523758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.523801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.523991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.524040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.524166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.524204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.524310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.524347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.524483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.524518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.524653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.524699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.524808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.524843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.524986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.525022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.525168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.525208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.525347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.525384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.525527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.525562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.525672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.525717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.525874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.944 [2024-11-19 08:01:28.525923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.944 qpair failed and we were unable to recover it.
00:37:36.944 [2024-11-19 08:01:28.526069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.526105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.526255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.526291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.526401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.526437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.526577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.526611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.526747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.526798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 
00:37:36.944 [2024-11-19 08:01:28.526920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.526958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.527097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.527138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.527278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.527314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.527456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.527493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.527627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.527662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 
00:37:36.944 [2024-11-19 08:01:28.527840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.527876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.527990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.528025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.528162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.528197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.528307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.944 [2024-11-19 08:01:28.528343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.944 qpair failed and we were unable to recover it. 00:37:36.944 [2024-11-19 08:01:28.528497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.528534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 
00:37:36.945 [2024-11-19 08:01:28.528674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.528716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.528858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.528894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.529028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.529063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.529224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.529259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.529378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.529414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 
00:37:36.945 [2024-11-19 08:01:28.529556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.529592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.529750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.529801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.529916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.529954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.530092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.530128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.530261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.530296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 
00:37:36.945 [2024-11-19 08:01:28.530408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.530444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.530610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.530660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.530821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.530863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.531006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.531042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.531181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.531217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 
00:37:36.945 [2024-11-19 08:01:28.531359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.531395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.531510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.531546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.531712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.531749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.531902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.531952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.532062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.532100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 
00:37:36.945 [2024-11-19 08:01:28.532241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.532276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.532438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.532474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.532604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.532640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.532772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.532810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.532956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.532993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 
00:37:36.945 [2024-11-19 08:01:28.533134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.533170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.533282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.533317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.533451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.533486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.533624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.533659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.533803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.533839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 
00:37:36.945 [2024-11-19 08:01:28.534007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.534044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.534152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.534188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.534298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.534335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.534505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.534541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 00:37:36.945 [2024-11-19 08:01:28.534647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.945 [2024-11-19 08:01:28.534684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.945 qpair failed and we were unable to recover it. 
00:37:36.945 [2024-11-19 08:01:28.534835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.534872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.534986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.535022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.535157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.535192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.535356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.535392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.535548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.535598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 
00:37:36.946 [2024-11-19 08:01:28.535721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.535772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.535913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.535950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.536093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.536129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.536267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.536310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.536417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.536453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 
00:37:36.946 [2024-11-19 08:01:28.536621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.536658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.536820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.536870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.536997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.537039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.537210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.537246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.537387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.537423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 
00:37:36.946 [2024-11-19 08:01:28.537556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.537592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.537712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.537749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.537883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.537924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.538088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.538124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.538267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.538302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 
00:37:36.946 [2024-11-19 08:01:28.538420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.538470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.538614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.538652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.538817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.538869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.539043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.539079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.539182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.539218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 
00:37:36.946 [2024-11-19 08:01:28.539361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.539397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.539535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.539572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.539712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.539763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.539931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.539971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.540087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.540137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 
00:37:36.946 [2024-11-19 08:01:28.540281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.540317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.946 [2024-11-19 08:01:28.540461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.946 [2024-11-19 08:01:28.540497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.946 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.540639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.540675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.540826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.540862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.541058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.541098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 
00:37:36.947 [2024-11-19 08:01:28.541227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.541264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.541408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.541445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.541583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.541619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.541733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.541769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.541878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.541914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 
00:37:36.947 [2024-11-19 08:01:28.542081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.542117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.542288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.542325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.542463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.542511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.542657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.542701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 00:37:36.947 [2024-11-19 08:01:28.542858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.947 [2024-11-19 08:01:28.542909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.947 qpair failed and we were unable to recover it. 
00:37:36.947 [2024-11-19 08:01:28.543089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.543128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.543292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.543329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.543460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.543497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.543604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.543641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.543820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.543857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.543959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.543994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.544137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.544175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.544339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.544375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.544544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.544581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.544719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.544756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.544894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.544958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.545077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.545114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.545222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.545263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.545430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.545468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.545583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.545620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.545787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.545838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.545961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.545998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.546137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.546172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.546308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.546344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.546475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.546511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.546663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.546724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.546875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.546914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.547093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.547143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.547266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.947 [2024-11-19 08:01:28.547304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.947 qpair failed and we were unable to recover it.
00:37:36.947 [2024-11-19 08:01:28.547473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.547511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.547685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.547727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.547855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.547892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.548004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.548040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.548181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.548217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.548325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.548362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.548507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.548543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.548657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.548701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.548823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.548861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.549019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.549070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.549185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.549223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.549332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.549368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.549532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.549568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.549714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.549751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.549854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.549889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.550055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.550092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.550202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.550240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.550387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.550423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.550566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.550603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.550714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.550757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.550919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.550969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.551147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.551185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.551339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.551389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.551538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.551575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.551758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.551809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.551923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.551960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.552096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.552133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.552272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.552308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.552443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.552484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.552654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.552699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.552818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.552856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.552995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.553033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.553164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.553201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.553353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.553391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.553528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.553576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.553723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.553761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.553900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.553937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.948 [2024-11-19 08:01:28.554077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.948 [2024-11-19 08:01:28.554113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.948 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.554217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.554253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.554353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.554389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.554494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.554530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.554675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.554719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.554870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.554920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.555046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.555083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.555199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.555235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.555373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.555409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.555519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.555555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.555702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.555739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.555846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.555882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.556032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.556083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.556261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.556299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.556418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.556455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.556619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.556656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.556842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.556880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.556993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.557029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.557201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.557251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.557369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.557406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.557573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.557609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.557714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.557751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.557898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.557934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.558080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.558115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.558246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.558281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.558428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.558468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.558628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.558679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.558850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.558900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.559050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.559088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.559233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.559270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.559384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.559420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.559560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.559602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.559717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.559755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.559910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.949 [2024-11-19 08:01:28.559962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.949 qpair failed and we were unable to recover it.
00:37:36.949 [2024-11-19 08:01:28.560139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.949 [2024-11-19 08:01:28.560176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.949 qpair failed and we were unable to recover it. 00:37:36.949 [2024-11-19 08:01:28.560314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.949 [2024-11-19 08:01:28.560350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.949 qpair failed and we were unable to recover it. 00:37:36.949 [2024-11-19 08:01:28.560509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.949 [2024-11-19 08:01:28.560545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.949 qpair failed and we were unable to recover it. 00:37:36.949 [2024-11-19 08:01:28.560681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.949 [2024-11-19 08:01:28.560722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.949 qpair failed and we were unable to recover it. 00:37:36.949 [2024-11-19 08:01:28.560875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.560924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 
00:37:36.950 [2024-11-19 08:01:28.561075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.561114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.561276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.561312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.561419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.561455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.561615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.561665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.561856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.561898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 
00:37:36.950 [2024-11-19 08:01:28.562042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.562078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.562198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.562234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.562370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.562406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.562549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.562584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.562719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.562755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 
00:37:36.950 [2024-11-19 08:01:28.562897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.562932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.563029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.563064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.563169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.563205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.563329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.563380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.563540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.563591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 
00:37:36.950 [2024-11-19 08:01:28.563767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.563818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.563971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.564009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.564175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.564211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.564346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.564382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.564491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.564527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 
00:37:36.950 [2024-11-19 08:01:28.564674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.564723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.564853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.564904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.565022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.565061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.565205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.565243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.565358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.565395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 
00:37:36.950 [2024-11-19 08:01:28.565530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.565568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.565737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.565773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.565882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.565921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.566061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.566097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.566200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.566236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 
00:37:36.950 [2024-11-19 08:01:28.566384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.566420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.566556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.950 [2024-11-19 08:01:28.566594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.950 qpair failed and we were unable to recover it. 00:37:36.950 [2024-11-19 08:01:28.566756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.566813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.566991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.567028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.567160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.567196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 
00:37:36.951 [2024-11-19 08:01:28.567310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.567345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.567448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.567483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.567624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.567660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.567778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.567813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.567941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.567977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 
00:37:36.951 [2024-11-19 08:01:28.568114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.568150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.568315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.568350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.568474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.568514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.568702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.568753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.568910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.568961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 
00:37:36.951 [2024-11-19 08:01:28.569080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.569117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.569261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.569297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.569436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.569472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.569600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.569636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.569801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.569852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 
00:37:36.951 [2024-11-19 08:01:28.569978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.570018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.570133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.570169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.570304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.570341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.570509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.570546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.570708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.570759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 
00:37:36.951 [2024-11-19 08:01:28.570881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.570918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.571058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.571098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.571204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.571241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.571379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.571415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.571551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.571592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 
00:37:36.951 [2024-11-19 08:01:28.571713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.571751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.571883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.571919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.572032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.572069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.572204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.572240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.572348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.572385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 
00:37:36.951 [2024-11-19 08:01:28.572520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.572556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.572714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.572765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.572935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.572973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.573105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.573141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.951 qpair failed and we were unable to recover it. 00:37:36.951 [2024-11-19 08:01:28.573291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.951 [2024-11-19 08:01:28.573326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 
00:37:36.952 [2024-11-19 08:01:28.573434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.573470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 00:37:36.952 [2024-11-19 08:01:28.573598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.573634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 00:37:36.952 [2024-11-19 08:01:28.573748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.573785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 00:37:36.952 [2024-11-19 08:01:28.573923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.573975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 00:37:36.952 [2024-11-19 08:01:28.574109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.574160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 
00:37:36.952 [2024-11-19 08:01:28.574306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.574342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 00:37:36.952 [2024-11-19 08:01:28.574484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.574520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 00:37:36.952 [2024-11-19 08:01:28.574629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.574664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 00:37:36.952 [2024-11-19 08:01:28.574832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.574868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 00:37:36.952 [2024-11-19 08:01:28.575000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.952 [2024-11-19 08:01:28.575036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.952 qpair failed and we were unable to recover it. 
00:37:36.952 [2024-11-19 08:01:28.575172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.575207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.575318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.575358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.575507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.575544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.575655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.575697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.575809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.575846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.575960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.575996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.576136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.576172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.576279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.576315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.576466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.576505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.576662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.576721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.576863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.576900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.577043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.577080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.577220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.577255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.577384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.577420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.577578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.577614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.577754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.577793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.577948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.577999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.578168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.578204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.578317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.578354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.578518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.578559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.578722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.578759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.578900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.578937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.579075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.579110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.579272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.579308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.579420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.579455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.579607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.579658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.952 [2024-11-19 08:01:28.579835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.952 [2024-11-19 08:01:28.579887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.952 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.580064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.580104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.580211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.580249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.580364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.580401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.580508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.580545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.580673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.580731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.580882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.580921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.581070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.581107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.581243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.581279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.581415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.581451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.581602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.581639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.581791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.581828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.581947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.581983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.582124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.582171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.582336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.582371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.582525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.582576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.582738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.582789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.582939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.582977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.583147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.583184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.583326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.583363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.583509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.583546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.583686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.583730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.583857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.583907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.584061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.584099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.584265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.584301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.584439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.584475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.584619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.584659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.584782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.584819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.584972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.585023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.585170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.585206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.585347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.585382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.585484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.585519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.585654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.585703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.585849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.585890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.586020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.586055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.586167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.586202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.586342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.586379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.586549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.953 [2024-11-19 08:01:28.586586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.953 qpair failed and we were unable to recover it.
00:37:36.953 [2024-11-19 08:01:28.586742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.586793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.586931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.586970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.587108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.587144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.587281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.587317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.587446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.587497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.587720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.587757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.587876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.587912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.588063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.588100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.588265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.588300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.588448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.588483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.588653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.588699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.588841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.588877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.589009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.589060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.589186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.589225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.589339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.589376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.589511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.589547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.589659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.589701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.589827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.589863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.590021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.590057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.590221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.590256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.590393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.590429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.590561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.590611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.590814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.590865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.591013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.591053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.591220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.591269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.591409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.591445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.591600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.591650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.591821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.591870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.592026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.592062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.592177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.592212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.592322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.592357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.592486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.592521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.592629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.592665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.592782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.592817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.592948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.592983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.593122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.593162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.593299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.593333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.954 [2024-11-19 08:01:28.593472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.954 [2024-11-19 08:01:28.593506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.954 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.593670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.593716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.593833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.593873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.594000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.594056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.594189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.594227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.594337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.594379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.594553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.594589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.594751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.594801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.594921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.594957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.595067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.595102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.595207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.595242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.595402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.595437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.595554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.595591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.595739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.955 [2024-11-19 08:01:28.595775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.955 qpair failed and we were unable to recover it.
00:37:36.955 [2024-11-19 08:01:28.595920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.595960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.596097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.596133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.596300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.596335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.596439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.596475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.596628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.596678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 
00:37:36.955 [2024-11-19 08:01:28.596833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.596871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.596997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.597033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.597148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.597183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.597324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.597358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.597470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.597508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 
00:37:36.955 [2024-11-19 08:01:28.597619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.597653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.597815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.597853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.597976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.598016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.598154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.598189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.598335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.598371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 
00:37:36.955 [2024-11-19 08:01:28.598483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.598520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.598672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.598735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.598888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.598926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.599028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.599065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 00:37:36.955 [2024-11-19 08:01:28.599165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.955 [2024-11-19 08:01:28.599201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.955 qpair failed and we were unable to recover it. 
00:37:36.955 [2024-11-19 08:01:28.599330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.599380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.599530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.599572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.599733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.599783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.599902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.599938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.600051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.600092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 
00:37:36.956 [2024-11-19 08:01:28.600202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.600236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.600380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.600417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.600536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.600575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.600757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.600794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.600915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.600950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 
00:37:36.956 [2024-11-19 08:01:28.601068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.601104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.601221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.601259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.601368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.601404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.601530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.601566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.601732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.601767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 
00:37:36.956 [2024-11-19 08:01:28.601870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.601905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.602033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.602085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.602235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.602275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.602426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.602464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.602576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.602613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 
00:37:36.956 [2024-11-19 08:01:28.602780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.602817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.602933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.602979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.603121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.603157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.603296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.603331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.603465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.603500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 
00:37:36.956 [2024-11-19 08:01:28.603635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.603671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.603827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.603862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.603983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.604021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.604156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.604193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.604352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.604403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 
00:37:36.956 [2024-11-19 08:01:28.604517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.604554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.604706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.604753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.604919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.604967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.605074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.605112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.605254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.605290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 
00:37:36.956 [2024-11-19 08:01:28.605427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.605464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.605582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.605622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.956 [2024-11-19 08:01:28.605740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.956 [2024-11-19 08:01:28.605777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.956 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.605884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.605920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.606064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.606099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 
00:37:36.957 [2024-11-19 08:01:28.606209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.606245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.606362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.606399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.606510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.606547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.606686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.606736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.606877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.606918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 
00:37:36.957 [2024-11-19 08:01:28.607072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.607109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.607244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.607281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.607425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.607462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.607605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.607641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.607773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.607822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 
00:37:36.957 [2024-11-19 08:01:28.607992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.608029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.608136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.608172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.608352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.608391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.608531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.608578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 00:37:36.957 [2024-11-19 08:01:28.608738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.957 [2024-11-19 08:01:28.608787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.957 qpair failed and we were unable to recover it. 
00:37:36.957 [2024-11-19 08:01:28.608916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.608975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.609121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.609160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.609269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.609307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.609447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.609482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.609662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.609732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.609899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.609938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.610090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.610138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.610306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.610343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.610465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.610503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.610702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.610751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.610897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.610934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.611051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.611087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.611251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.611286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.611456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.611491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.611617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.611668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.611869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.611920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.612082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.612121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.612235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.612273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.612418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.612455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.612571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.957 [2024-11-19 08:01:28.612609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.957 qpair failed and we were unable to recover it.
00:37:36.957 [2024-11-19 08:01:28.612761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.612797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.612948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.612988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.613119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.613157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.613323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.613359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.613509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.613546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.613714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.613752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.613876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.613925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.614051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.614088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.614253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.614290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.614425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.614467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.614612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.614650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.614784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.614834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.614991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.615029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.615160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.615196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.615334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.615370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.615508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.615545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.615659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.615704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.615888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.615924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.616070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.616120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.616240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.616277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.616421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.616459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.616566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.616603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.616752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.616788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.616932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.616968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.617085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.617121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.617261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.617296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.617459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.617494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.617673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.617745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.617911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.617960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.618119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.618159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.618323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.618360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.618525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.618560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.618709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.618760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.618904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.618961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.619079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.619115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.619253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.619289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.619431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.619466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.619622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.958 [2024-11-19 08:01:28.619673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.958 qpair failed and we were unable to recover it.
00:37:36.958 [2024-11-19 08:01:28.619813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.619851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.620029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.620069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.620209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.620246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.620381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.620417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.620550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.620585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.620702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.620746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.620887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.620922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.621068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.621103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.621234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.621270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.621416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.621466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.621616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.621667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.621833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.621874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.622015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.622051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.622214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.622249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.622382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.622417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.622549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.622585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.622757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.622807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.622952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.623003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.623178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.623217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.623385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.623422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.623537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.623574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.623754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.623791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.623918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.623964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.624079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.624114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.624227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.624264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.624423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.624474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.624634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.624683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.624828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.624867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.625022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.625058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.625191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.625228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.625344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.625383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.625495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.625532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.625676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.625727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.625854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.625890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.626007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.626044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.626175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.626211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.626376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.626413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.959 qpair failed and we were unable to recover it.
00:37:36.959 [2024-11-19 08:01:28.626523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.959 [2024-11-19 08:01:28.626561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.626735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.626772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.626886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.626920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.627042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.627077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.627238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.627274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.627413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.627450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.627576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.627625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.627806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.627843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.627990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.628027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.628153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.628188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.628327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.628363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.628470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.628505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.628634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.628685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.628861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.628911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.629034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.629082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.629226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.629263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.629399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.629435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.629542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.629580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.629732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.629768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.629915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.960 [2024-11-19 08:01:28.629953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.960 qpair failed and we were unable to recover it.
00:37:36.960 [2024-11-19 08:01:28.630140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.630178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.630304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.630341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.630455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.630502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.630641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.630677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.630796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.630833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 
00:37:36.960 [2024-11-19 08:01:28.630951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.630988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.631103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.631138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.631239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.631275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.631422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.631457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.631583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.631634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 
00:37:36.960 [2024-11-19 08:01:28.631794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.631833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.631970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.632006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.632144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.632180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.960 [2024-11-19 08:01:28.632284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.960 [2024-11-19 08:01:28.632320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.960 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.632464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.632502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 08:01:28.632620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.632656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.632798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.632849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.632966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.633004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.633143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.633179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.633281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.633317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 08:01:28.633448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.633484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.633615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.633652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.633800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.633850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.633969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.634009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.634158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.634195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 08:01:28.634329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.634365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.634495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.634532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.634713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.634750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.634887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.634925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.635055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.635105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 08:01:28.635252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.635290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.635430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.635466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.635603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.635639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.635799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.635850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.635996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.636039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 08:01:28.636183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.636220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.636389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.636425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.636571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.636607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.636728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.636766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.636929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.636965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 08:01:28.637133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.637185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.637329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.637373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.637498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.637548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.637702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.637740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.637875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.637911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 08:01:28.638059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.638096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.638259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.638295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.638429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.638465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.638614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.638651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.638807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.638845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 
00:37:36.961 [2024-11-19 08:01:28.638970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.961 [2024-11-19 08:01:28.639021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.961 qpair failed and we were unable to recover it. 00:37:36.961 [2024-11-19 08:01:28.639144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.639180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.639313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.639348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.639455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.639491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.639624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.639659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 
00:37:36.962 [2024-11-19 08:01:28.639834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.639871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.640016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.640052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.640183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.640219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.640357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.640393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.640561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.640597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 
00:37:36.962 [2024-11-19 08:01:28.640709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.640745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.640899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.640936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.641079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.641114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.641252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.641288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.641392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.641428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 
00:37:36.962 [2024-11-19 08:01:28.641561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.641598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.641762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.641814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.641943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.641983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.642176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.642213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.642349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.642386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 
00:37:36.962 [2024-11-19 08:01:28.642501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.642538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.642682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.642725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.642878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.642928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.643076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.643114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.643222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.643264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 
00:37:36.962 [2024-11-19 08:01:28.643424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.643459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.643595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.643630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.643778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.643816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.643932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.643969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 00:37:36.962 [2024-11-19 08:01:28.644099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.962 [2024-11-19 08:01:28.644136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.962 qpair failed and we were unable to recover it. 
00:37:36.965 [2024-11-19 08:01:28.663665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 08:01:28.663725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 08:01:28.663872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 08:01:28.663909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 08:01:28.664021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 08:01:28.664056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 08:01:28.664196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 08:01:28.664232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 08:01:28.664338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 08:01:28.664374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 
00:37:36.965 [2024-11-19 08:01:28.664509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 08:01:28.664544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 08:01:28.664671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 08:01:28.664729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.965 qpair failed and we were unable to recover it. 00:37:36.965 [2024-11-19 08:01:28.664872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.965 [2024-11-19 08:01:28.664922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.665078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.665117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.665258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.665296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 08:01:28.665432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.665468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.665645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.665707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.665857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.665893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.666006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.666043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.666177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.666213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 08:01:28.666348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.666383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.666528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.666564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.666708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.666745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.666886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.666922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.667084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.667120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 08:01:28.667221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.667256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.667389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.667425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.667569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.667605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.667745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.667782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.667926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.667963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 08:01:28.668111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.668153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.668299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.668336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.668496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.668532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.668700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.668737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.668873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.668910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 08:01:28.669045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.669081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.669192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.669233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.669404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.669439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.669575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.669610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.669752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.669788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 08:01:28.669926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.669961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.670092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.670127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.670251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.670288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.670451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.670499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.670637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.670673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 08:01:28.670820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.670856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.671008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.671058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.671180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.671218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.671364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.671401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 00:37:36.966 [2024-11-19 08:01:28.671541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.966 [2024-11-19 08:01:28.671577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.966 qpair failed and we were unable to recover it. 
00:37:36.966 [2024-11-19 08:01:28.671704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.671742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.671878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.671914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.672054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.672090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.672198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.672234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.672402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.672438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 
00:37:36.967 [2024-11-19 08:01:28.672580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.672617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.672742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.672793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.672934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.672973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.673142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.673180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.673325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.673363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 
00:37:36.967 [2024-11-19 08:01:28.673545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.673583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.673706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.673743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.673853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.673889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.674036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.674072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.674214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.674250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 
00:37:36.967 [2024-11-19 08:01:28.674386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.674442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.674592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.674630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.674777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.674814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.674948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.674984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.675120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.675156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 
00:37:36.967 [2024-11-19 08:01:28.675296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.675332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.675450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.675487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.675652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.675695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.675854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.675904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.676066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.676102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 
00:37:36.967 [2024-11-19 08:01:28.676208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.676243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.676360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.676402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.676548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.676584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.676755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.676791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 00:37:36.967 [2024-11-19 08:01:28.676899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.967 [2024-11-19 08:01:28.676935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.967 qpair failed and we were unable to recover it. 
00:37:36.967 [2024-11-19 08:01:28.677040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.967 [2024-11-19 08:01:28.677075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.967 qpair failed and we were unable to recover it.
00:37:36.967 [2024-11-19 08:01:28.677214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.967 [2024-11-19 08:01:28.677250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.967 qpair failed and we were unable to recover it.
00:37:36.967 [2024-11-19 08:01:28.677368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.967 [2024-11-19 08:01:28.677405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.967 qpair failed and we were unable to recover it.
00:37:36.967 [2024-11-19 08:01:28.677538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.967 [2024-11-19 08:01:28.677573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.967 qpair failed and we were unable to recover it.
00:37:36.967 [2024-11-19 08:01:28.677675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.967 [2024-11-19 08:01:28.677723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.967 qpair failed and we were unable to recover it.
00:37:36.967 [2024-11-19 08:01:28.677838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.967 [2024-11-19 08:01:28.677874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.967 qpair failed and we were unable to recover it.
00:37:36.967 [2024-11-19 08:01:28.678029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.967 [2024-11-19 08:01:28.678069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.967 qpair failed and we were unable to recover it.
00:37:36.967 [2024-11-19 08:01:28.678210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.967 [2024-11-19 08:01:28.678247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.967 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.678417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.678454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.678607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.678644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.678799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.678850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.678998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.679037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.679178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.679216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.679317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.679354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.679498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.679535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.679674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.679717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.679836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.679873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.680005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.680041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.680180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.680215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.680317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.680352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.680487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.680526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.680643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.680698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.680858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.680908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.681061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.681097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.681208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.681243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.681404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.681439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.681577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.681613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.681754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.681790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.681927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.681962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.682092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.682127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.682263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.682299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.682459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.682510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.682695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.682735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.682895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.682945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.683067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.683104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.683214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.683249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.683412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.683452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.683557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.683593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.683730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.683781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.683942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.683981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.684149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.684186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.684327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.684363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.684475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.684513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.684651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.684699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.684817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.968 [2024-11-19 08:01:28.684853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.968 qpair failed and we were unable to recover it.
00:37:36.968 [2024-11-19 08:01:28.684998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.685038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.685177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.685213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.685328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.685365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.685535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.685571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.685715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.685752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.685877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.685914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.686089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.686125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.686263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.686299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.686423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.686474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.686622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.686658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.686832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.686883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.687025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.687063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.687203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.687239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.687380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.687416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.687524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.687561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.687747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.687799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.687960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.688010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.688163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.688202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.688378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.688415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.688555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.688591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.688738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.688776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.688892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.688929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.689095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.689132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.689247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.689284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.689400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.689436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.689571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.689607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.689720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.689757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.689869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.689907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.690019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.690055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.690161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.690196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.690358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.690393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.690574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.690630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.690794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.969 [2024-11-19 08:01:28.690844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.969 qpair failed and we were unable to recover it.
00:37:36.969 [2024-11-19 08:01:28.691002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.691039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.691180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.691217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.691354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.691390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.691501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.691537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.691659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.691719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.691849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.691890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.692033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.692072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.692183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.692220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.692358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.692394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.692501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.692537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.692675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.692718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.692893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.692932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.693082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.693120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.693263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.693300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.693467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.693503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.693637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.693673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.693790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.693826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.693961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.693997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.694162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.694197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.694353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.694405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.694552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.694589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.694735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.694788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.694928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.694963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.695073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.695109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.695229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.695264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.695382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.695417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.695522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.695558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.695666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.695709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.695840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.695875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.695982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.696018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.696163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.696199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.696377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.696415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.696554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.696590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.696704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.696742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.696875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.696911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.697090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.697127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.697264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.970 [2024-11-19 08:01:28.697300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.970 qpair failed and we were unable to recover it.
00:37:36.970 [2024-11-19 08:01:28.697469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.970 [2024-11-19 08:01:28.697506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.697620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.697664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.697848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.697898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.698024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.698061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.698200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.698235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 
00:37:36.971 [2024-11-19 08:01:28.698403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.698439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.698580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.698616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.698728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.698765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.698879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.698916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.699081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.699117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 
00:37:36.971 [2024-11-19 08:01:28.699236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.699271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.699375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.699410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.699570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.699606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.699742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.699778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.699881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.699917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 
00:37:36.971 [2024-11-19 08:01:28.700094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.700134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.700275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.700312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.700456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.700493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.700599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.700635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.700825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.700877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 
00:37:36.971 [2024-11-19 08:01:28.700998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.701035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.701169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.701205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.701335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.701371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.701516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.701552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.701700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.701737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 
00:37:36.971 [2024-11-19 08:01:28.701849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.701886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.702034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.702069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.702201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.702236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.702353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.702389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.702545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.702596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 
00:37:36.971 [2024-11-19 08:01:28.702769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.702806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.702923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.702960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.703098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.703134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.703238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.703273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.703434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.703469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 
00:37:36.971 [2024-11-19 08:01:28.703606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.703641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.703778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.703814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.971 qpair failed and we were unable to recover it. 00:37:36.971 [2024-11-19 08:01:28.703918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.971 [2024-11-19 08:01:28.703953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.704092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.704127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.704268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.704303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 
00:37:36.972 [2024-11-19 08:01:28.704443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.704480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.704612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.704654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.704818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.704872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.705054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.705090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.705228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.705264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 
00:37:36.972 [2024-11-19 08:01:28.705367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.705402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.705564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.705600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.705727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.705777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.705920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.705957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.706068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.706109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 
00:37:36.972 [2024-11-19 08:01:28.706253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.706291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.706430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.706467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.706603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.706640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.706800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.706836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.706987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.707027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 
00:37:36.972 [2024-11-19 08:01:28.707179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.707217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.707324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.707361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.707523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.707559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.707704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.707741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.707851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.707887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 
00:37:36.972 [2024-11-19 08:01:28.708030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.708067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.708207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.708244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.708353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.708388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.708515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.708552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.708666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.708710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 
00:37:36.972 [2024-11-19 08:01:28.708856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.708893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.709036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.709072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.709209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.709246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.709387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.709423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.709527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.709565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 
00:37:36.972 [2024-11-19 08:01:28.709679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.709720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.709840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.709891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.710019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.710057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.710198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.710235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 00:37:36.972 [2024-11-19 08:01:28.710349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.972 [2024-11-19 08:01:28.710386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.972 qpair failed and we were unable to recover it. 
00:37:36.972 [2024-11-19 08:01:28.710528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.973 [2024-11-19 08:01:28.710565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.973 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it") repeats continuously from 08:01:28.710528 through 08:01:28.731350, cycling over tqpairs 0x615000210000, 0x61500021ff00, 0x6150001ffe80, and 0x6150001f2f00, all targeting addr=10.0.0.2, port=4420 ...]
00:37:36.976 [2024-11-19 08:01:28.731516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.731552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.731657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.731699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.731818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.731856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.732001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.732040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.732158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.732196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 08:01:28.732366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.732402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.732540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.732575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.732723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.732764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.732880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.732917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.733054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.733091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 08:01:28.733193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.733229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.733395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.733439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.733565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.733616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.733741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.733778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.733918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.733953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 08:01:28.734084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.734119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.734281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.734316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.734460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.734497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.734613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.734649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.734799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.734836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 08:01:28.734944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.734981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.735131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.735181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.735327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.735366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.735509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.735546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.735682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.735723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 08:01:28.735862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.735913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.736037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.736076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.736190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.736227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.736374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.736412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.736534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.736573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 
00:37:36.976 [2024-11-19 08:01:28.736725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.976 [2024-11-19 08:01:28.736764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.976 qpair failed and we were unable to recover it. 00:37:36.976 [2024-11-19 08:01:28.736878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.736916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.737067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.737105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.737248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.737284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.737391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.737428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 08:01:28.737569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.737606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.737774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.737812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.737930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.737967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.738083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.738120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.738260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.738297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 08:01:28.738405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.738441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.738543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.738579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.738718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.738755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.738886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.738923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.739063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.739100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 08:01:28.739235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.739272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.739392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.739429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.739553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.739602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.739754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.739791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.739938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.739976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 08:01:28.740114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.740151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.740281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.740322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.740438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.740474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.740605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.740640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.740772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.740813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 08:01:28.740933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.740970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.741085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.741120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.741255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.741291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.741410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.741448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.741585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.741621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 08:01:28.741761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.741805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.741944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.741980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.742097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.742133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.742296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.742332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.742441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.742478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 
00:37:36.977 [2024-11-19 08:01:28.742595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.742631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.742779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.742817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.742949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.742985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.977 [2024-11-19 08:01:28.743084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.977 [2024-11-19 08:01:28.743120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.977 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 08:01:28.743262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 08:01:28.743298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 
00:37:36.978 [2024-11-19 08:01:28.743413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 08:01:28.743450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 08:01:28.743615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 08:01:28.743651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 08:01:28.743775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 08:01:28.743812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 08:01:28.743951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 08:01:28.743986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 00:37:36.978 [2024-11-19 08:01:28.744125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.978 [2024-11-19 08:01:28.744167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.978 qpair failed and we were unable to recover it. 
00:37:36.978 [2024-11-19 08:01:28.744300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.744336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.744466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.744503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.744684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.744743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.744871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.744908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.745072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.745108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.745245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.745281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.745413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.745449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.745582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.745618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.745763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.745801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.745925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.745976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.746086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.746123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.746288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.746324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.746483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.746519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.746657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.746701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.746857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.746907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.747055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.747093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.747205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.747247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.747379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.747415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.747526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.747562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.747715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.747766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.747881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.747920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.748066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.748103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.748212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.748250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.748394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.748431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.748582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.748619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.748766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.748805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.978 [2024-11-19 08:01:28.748939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.978 [2024-11-19 08:01:28.748975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.978 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.749107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.749143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.749284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.749319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.749488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.749525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.749657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.749716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.749838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.749876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.749981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.750018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.750156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.750193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.750357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.750394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.750501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.750538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.750681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.750725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.750868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.750903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.751040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.751076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.751214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.751250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.751407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.751458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.751581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.751618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.751820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.751871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.752002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.752039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.752175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.752222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.752338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.752374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.752505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.752541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.752670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.752717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.752876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.752915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.753056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.753092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.753240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.753276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.753390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.753426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.753562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.753598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.753761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.753798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.753944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.753980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.754143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.754179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.754296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.754332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.754507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.754544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.754673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.754718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.754887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.754923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.755035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.755071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.755206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.755241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.755392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.755442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.755618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.979 [2024-11-19 08:01:28.755655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.979 qpair failed and we were unable to recover it.
00:37:36.979 [2024-11-19 08:01:28.755799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.755849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.755998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.756034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.756195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.756232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.756371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.756408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.756572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.756608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.756720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.756755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.756918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.756969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.757095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.757133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.757275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.757312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.757448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.757485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.757652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.757696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.757824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.757865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.757982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.758019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.758181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.758217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.758332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.758368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.758504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.758540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.758706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.758743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.758848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.758884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.759002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.759038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.759149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.759189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.759324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.759360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.759495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.759533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.759705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.759743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.759847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.759884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.759984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.760020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.760130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.760166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.760308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.760346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.760455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.760491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.760632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.760668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.760813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.760850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.760955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.760991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.761141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.761176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.761342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.761377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.761523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.761560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.761716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.761753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.761859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.761895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.762030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.762067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.980 [2024-11-19 08:01:28.762230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.980 [2024-11-19 08:01:28.762267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.980 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.762406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.762444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.762586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.762623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.762778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.762814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.762922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.762960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.763128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.763165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.763304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.763342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.763478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.763514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.763627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.763663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.763814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.763865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.763977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.764014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.764144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.764180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.764292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.764328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.764467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.981 [2024-11-19 08:01:28.764502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:36.981 qpair failed and we were unable to recover it.
00:37:36.981 [2024-11-19 08:01:28.764614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.764651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.764799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.764835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.764971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.765007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.765148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.765184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.765314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.765350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 
00:37:36.981 [2024-11-19 08:01:28.765499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.765537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.765703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.765754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.765902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.765941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.766071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.766114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.766278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.766315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 
00:37:36.981 [2024-11-19 08:01:28.766447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.766498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.766636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.766673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.766852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.766902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.767049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.767086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.767231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.767268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 
00:37:36.981 [2024-11-19 08:01:28.767431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.767466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.767579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.767617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.767744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.767795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.767921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.767960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.768078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.768115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 
00:37:36.981 [2024-11-19 08:01:28.768254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.768291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.768406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.768443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.768617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.768653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.768783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.768821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.981 qpair failed and we were unable to recover it. 00:37:36.981 [2024-11-19 08:01:28.768939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.981 [2024-11-19 08:01:28.768976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 08:01:28.769129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.769166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.769309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.769358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.769503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.769541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.769685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.769746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.769880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.769930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 08:01:28.770077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.770115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.770283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.770319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.770432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.770467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.770588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.770625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.770816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.770867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 08:01:28.771002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.771041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.771211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.771248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.771387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.771423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.771533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.771569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.771729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.771780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 08:01:28.771954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.771993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.772110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.772146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.772259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.772295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.772449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.772486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.772625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.772675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 08:01:28.772850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.772888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.773034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.773084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.773218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.773254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.773363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.773403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.773527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.773565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 08:01:28.773708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.773759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.773885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.773925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.774073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.774111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.774215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.774251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.774382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.774433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 08:01:28.774578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.774616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.774755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.774791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.774922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.774959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.775120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.775155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.775277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.775328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 
00:37:36.982 [2024-11-19 08:01:28.775478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.775515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.775670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.982 [2024-11-19 08:01:28.775716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.982 qpair failed and we were unable to recover it. 00:37:36.982 [2024-11-19 08:01:28.775865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.775902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.776065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.776101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.776214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.776250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 08:01:28.776420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.776457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.776562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.776599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.776755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.776806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.776957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.776995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.777113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.777150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 08:01:28.777261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.777297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.777435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.777471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.777621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.777658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.777785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.777822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.777966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.778003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 08:01:28.778122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.778158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.778297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.778334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.778472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.778509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.778643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.778679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.778822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.778859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 08:01:28.779018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.779054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.779164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.779201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.779344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.779380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.779516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.779552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.779695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.779731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 08:01:28.779867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.779905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.780012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.780049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.780192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.780228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.780350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.780392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.780537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.780573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 08:01:28.780737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.780788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.780906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.780944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.781058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.781095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.781242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.781279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 00:37:36.983 [2024-11-19 08:01:28.781432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.781467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.983 qpair failed and we were unable to recover it. 
00:37:36.983 [2024-11-19 08:01:28.781575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.983 [2024-11-19 08:01:28.781612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.781788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.781838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.782017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.782056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.782164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.782200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.782343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.782380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 08:01:28.782533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.782583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.782762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.782799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.782923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.782961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.783103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.783139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.783275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.783310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 08:01:28.783422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.783459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.783628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.783665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.783834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.783884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.784049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.784100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.784246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.784284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 08:01:28.784418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.784454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.784568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.784604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.784734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.784771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.784895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.784946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.785072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.785113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 08:01:28.785258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.785296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.785459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.785496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.785636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.785672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.785796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.785833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.785970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.786006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 08:01:28.786142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.786178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.786313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.786348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.786488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.786524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.786642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.786682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.786871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.786921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 08:01:28.787075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.787113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.787218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.787254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.787387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.787423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.787546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.787602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.787753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.787791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 
00:37:36.984 [2024-11-19 08:01:28.787901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.787937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.788064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.788101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.984 [2024-11-19 08:01:28.788213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.984 [2024-11-19 08:01:28.788249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.984 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.788406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.788457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.788591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.788642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 08:01:28.788792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.788842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.788991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.789029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.789147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.789183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.789294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.789331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.789505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.789543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 08:01:28.789684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.789728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.789864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.789901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.790042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.790079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.790230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.790281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.790426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.790464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 08:01:28.790583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.790620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.790780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.790818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.790978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.791028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.791173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.791211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.791340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.791376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 08:01:28.791489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.791524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.791665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.791708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.791836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.791887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.792007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.792045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.792191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.792227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 08:01:28.792370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.792406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.792520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.792555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.792703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.792741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.792875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.792911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.793051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.793087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 08:01:28.793191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.793227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.793451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.793489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.793649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.793684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.793849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.793885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.794001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.794037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 08:01:28.794178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.794213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.794322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.794357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.794496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.794533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.794649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.794704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.794859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.794909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 
00:37:36.985 [2024-11-19 08:01:28.795061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.795100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.795265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.795303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.795416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.795452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.985 qpair failed and we were unable to recover it. 00:37:36.985 [2024-11-19 08:01:28.795694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.985 [2024-11-19 08:01:28.795731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 00:37:36.986 [2024-11-19 08:01:28.795865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.986 [2024-11-19 08:01:28.795902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.986 qpair failed and we were unable to recover it. 
00:37:36.989 [2024-11-19 08:01:28.816268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.816304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.816465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.816500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.816607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.816643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.816774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.816815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.816949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.816984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 
00:37:36.989 [2024-11-19 08:01:28.817126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.817162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.817304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.817340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.817478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.817514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.817653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.817700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.817867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.817903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 
00:37:36.989 [2024-11-19 08:01:28.818040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.818077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.818216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.818251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.818415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.818451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.818591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.818641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.818812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.818863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 
00:37:36.989 [2024-11-19 08:01:28.818982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.819021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.819188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.819225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.819369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.819405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.819512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.819548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.819720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.819758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 
00:37:36.989 [2024-11-19 08:01:28.819894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.819930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.820070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.820107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.820275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.820311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.820492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.820555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.820730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.820769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 
00:37:36.989 [2024-11-19 08:01:28.820902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.820939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.821103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.821140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.821282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.989 [2024-11-19 08:01:28.821317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.989 qpair failed and we were unable to recover it. 00:37:36.989 [2024-11-19 08:01:28.821470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.821519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.821663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.821710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 08:01:28.821846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.821910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.822029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.822069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.822180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.822217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.822319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.822356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.822466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.822503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 08:01:28.822627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.822677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.822814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.822852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.822998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.823035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.823155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.823191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.823333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.823372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 08:01:28.823512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.823551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.823699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.823737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.823849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.823886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.824040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.824082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.824215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.824252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 08:01:28.824387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.824423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.824530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.824566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.824719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.824769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.824934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.824983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.825136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.825174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 08:01:28.825396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.825433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.825566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.825601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.825738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.825774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.825879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.825915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.826023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.826058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 08:01:28.826198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.826234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.826372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.826408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.826556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.826595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.826742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.826780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.826896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.826934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 08:01:28.827051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.827087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.827213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.827264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.827417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.827455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.827565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.827601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.990 [2024-11-19 08:01:28.827734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.827770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 
00:37:36.990 [2024-11-19 08:01:28.827906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.990 [2024-11-19 08:01:28.827941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.990 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.828052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.828088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.828204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.828241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.828380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.828418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.828551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.828588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 08:01:28.828727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.828764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.828880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.828916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.829021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.829056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.829207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.829243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.829356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.829392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 08:01:28.829548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.829599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.829752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.829790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.829938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.829976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.830094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.830130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.830269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.830304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 08:01:28.830407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.830442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.830561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.830596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.830757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.830793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.830893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.830934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.831097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.831138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 08:01:28.831242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.831278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.831382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.831419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.831565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.831602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.831756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.831814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.831936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.831973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 08:01:28.832109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.832145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.832286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.832322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.832484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.832519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.832627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.832664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.832813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.832850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 08:01:28.832987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.833037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.833162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.833198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.833342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.833377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.833474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.833510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.833647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.833683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 
00:37:36.991 [2024-11-19 08:01:28.833843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.833894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.834021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.834058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.834165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.991 [2024-11-19 08:01:28.834200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.991 qpair failed and we were unable to recover it. 00:37:36.991 [2024-11-19 08:01:28.834342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.834378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.834529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.834567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 
00:37:36.992 [2024-11-19 08:01:28.834731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.834782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.834904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.834941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.835078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.835113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.835211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.835246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.835387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.835423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 
00:37:36.992 [2024-11-19 08:01:28.835563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.835603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.835771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.835809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.835920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.835960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.836128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.836165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.836281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.836317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 
00:37:36.992 [2024-11-19 08:01:28.836449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.836485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.836616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.836652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.836771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.836808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.836948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.836985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.837127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.837163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 
00:37:36.992 [2024-11-19 08:01:28.837306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.837344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.837478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.837516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.837630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.837668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.837819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.837861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.838003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.838039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 
00:37:36.992 [2024-11-19 08:01:28.838145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.838182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.838325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.838362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.838478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.838514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.838654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.838700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.838848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.838885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 
00:37:36.992 [2024-11-19 08:01:28.838997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.839034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.839178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.839214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.839366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.839402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.839538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.839575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.992 qpair failed and we were unable to recover it. 00:37:36.992 [2024-11-19 08:01:28.839719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.992 [2024-11-19 08:01:28.839755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 08:01:28.839866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.839903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.840049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.840086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.840231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.840268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.840398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.840434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.840572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.840609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 08:01:28.840774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.840824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.840995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.841032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.841173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.841210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.841315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.841351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.841461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.841497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 08:01:28.841608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.841645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.841788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.841826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.841970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.842007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.842169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.842205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.842371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.842407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 08:01:28.842565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.842602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.842734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.842772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.842910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.842947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.843112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.843148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.843311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.843347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 08:01:28.843508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.843546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.843682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.843764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.843903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.843938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.844039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.844078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.844230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.844266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 08:01:28.844379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.844415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.844563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.844601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.844742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.844793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.844914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.844951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.845106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.845143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 08:01:28.845296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.845332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.845476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.845513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.845654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.845700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.845852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.845888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.846003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.846039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 
00:37:36.993 [2024-11-19 08:01:28.846181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.846218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.846371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.993 [2024-11-19 08:01:28.846424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.993 qpair failed and we were unable to recover it. 00:37:36.993 [2024-11-19 08:01:28.846586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 08:01:28.846624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 08:01:28.846771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 08:01:28.846809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 08:01:28.846979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 08:01:28.847014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 
00:37:36.994 [2024-11-19 08:01:28.847151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 08:01:28.847188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 08:01:28.847349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 08:01:28.847385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 08:01:28.847532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 08:01:28.847569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 08:01:28.847678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 08:01:28.847733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 00:37:36.994 [2024-11-19 08:01:28.847917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.994 [2024-11-19 08:01:28.847976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:36.994 qpair failed and we were unable to recover it. 
00:37:36.994 [2024-11-19 08:01:28.848153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:36.994 [2024-11-19 08:01:28.848191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:36.994 qpair failed and we were unable to recover it.
00:37:36.994 [2024-11-19 08:01:28.848337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.848374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.848488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.848524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.848663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.848708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.848845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.848894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.849052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.849090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.849233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.849269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.849434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.849471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.849620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.849657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.849819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.849856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.850027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.850082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.850230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.850271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.850382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.850419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.850653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.850696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.850861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.850911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.851038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.851076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.851246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.851282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.851395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.851432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.851572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.851609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.851773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.851824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.851939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.851977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.852141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.852178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.852298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.852337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.852454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.852492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3138556 Killed "${NVMF_APP[@]}" "$@"
00:37:37.288 [2024-11-19 08:01:28.852650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.288 [2024-11-19 08:01:28.852693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.288 qpair failed and we were unable to recover it.
00:37:37.288 [2024-11-19 08:01:28.852810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.852846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.852987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.853023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.853159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:37:37.289 [2024-11-19 08:01:28.853195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:37.289 [2024-11-19 08:01:28.853344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.853381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:37.289 [2024-11-19 08:01:28.853517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.853553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.853683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.853725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.853859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.853894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.854011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.854047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.854154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.854190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.854321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.854362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.854503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.854539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.854656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.854704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.854862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.854898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.855020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.855057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.855221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.855257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.855395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.855431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.855574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.855611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.855805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.855856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.856009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.856048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.856188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.856224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.856368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.856404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.856521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.856557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.856665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.856710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.856898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.856933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.857074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.857114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.857253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.857289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.857404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.857440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.857547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.857582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.857736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.857773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.857901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.857951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.858115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.858153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.858308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.858344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.858447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.858483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.858622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.858658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.858809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.289 [2024-11-19 08:01:28.858846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.289 qpair failed and we were unable to recover it.
00:37:37.289 [2024-11-19 08:01:28.858967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.859005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.859160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.859198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.859332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.859369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3139121
00:37:37.290 [2024-11-19 08:01:28.859529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.859567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:37:37.290 [2024-11-19 08:01:28.859744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3139121
00:37:37.290 [2024-11-19 08:01:28.859795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.859916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.859958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3139121 ']'
00:37:37.290 [2024-11-19 08:01:28.860094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.860132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.860266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.860304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:37.290 [2024-11-19 08:01:28.860455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.860493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:37.290 [2024-11-19 08:01:28.860655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.860704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:37.290 [2024-11-19 08:01:28.860851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.860904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:37.290 [2024-11-19 08:01:28.861079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.861116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 08:01:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:37.290 [2024-11-19 08:01:28.861261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.861299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.861407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.861442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.861607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.861643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.861810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.861847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.862009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.862045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.862182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.862217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.862361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.862397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.862537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.862573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.862701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.862749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.862888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.862924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.863086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.863122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.863273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.863310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.863457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.863492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.863633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.863669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.863833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.863882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.864065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.864102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.864213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.290 [2024-11-19 08:01:28.864248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.290 qpair failed and we were unable to recover it.
00:37:37.290 [2024-11-19 08:01:28.864390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.290 [2024-11-19 08:01:28.864425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.290 qpair failed and we were unable to recover it. 00:37:37.290 [2024-11-19 08:01:28.864591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.290 [2024-11-19 08:01:28.864626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.290 qpair failed and we were unable to recover it. 00:37:37.290 [2024-11-19 08:01:28.864773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.290 [2024-11-19 08:01:28.864809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.290 qpair failed and we were unable to recover it. 00:37:37.290 [2024-11-19 08:01:28.865028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.290 [2024-11-19 08:01:28.865065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.290 qpair failed and we were unable to recover it. 00:37:37.290 [2024-11-19 08:01:28.865234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.865269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 
00:37:37.291 [2024-11-19 08:01:28.865402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.865438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.865544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.865581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.865743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.865799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.865937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.865987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.866162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.866199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 
00:37:37.291 [2024-11-19 08:01:28.866343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.866380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.866519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.866555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.866673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.866716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.866828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.866863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.866980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.867015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 
00:37:37.291 [2024-11-19 08:01:28.867155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.867191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.867294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.867330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.867449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.867499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.867670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.867713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.867858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.867893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 
00:37:37.291 [2024-11-19 08:01:28.868027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.868062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.868219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.868254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.868382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.868417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.868552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.868588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.868732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.868771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 
00:37:37.291 [2024-11-19 08:01:28.868925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.868974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.869117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.869154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.869269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.869306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.869445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.869480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.869594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.869644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 
00:37:37.291 [2024-11-19 08:01:28.869802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.869840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.869959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.869996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.870162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.870197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.870334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.870369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.870516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.870552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 
00:37:37.291 [2024-11-19 08:01:28.870697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.870734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.870846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.870882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.871041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.871077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.871190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.871226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.871331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.871367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 
00:37:37.291 [2024-11-19 08:01:28.871505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.871541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.291 [2024-11-19 08:01:28.871685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.291 [2024-11-19 08:01:28.871730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.291 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.871843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.871879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.872014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.872052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.872183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.872219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 
00:37:37.292 [2024-11-19 08:01:28.872344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.872393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.872539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.872576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.872722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.872774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.872884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.872920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.873059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.873094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 
00:37:37.292 [2024-11-19 08:01:28.873230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.873264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.873418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.873454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.873590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.873636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.873787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.873822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.873957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.873992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 
00:37:37.292 [2024-11-19 08:01:28.874146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.874196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.874371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.874409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.874576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.874612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.874721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.874758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.874977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.875013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 
00:37:37.292 [2024-11-19 08:01:28.875116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.875152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.875265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.875302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.875443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.875478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.875592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.875629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.875779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.875816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 
00:37:37.292 [2024-11-19 08:01:28.875933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.875968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.876127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.876162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.876272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.876307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.876447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.876482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.876597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.876632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 
00:37:37.292 [2024-11-19 08:01:28.876781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.876832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.876967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.877016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.877242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.877279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.877415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.292 [2024-11-19 08:01:28.877450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.292 qpair failed and we were unable to recover it. 00:37:37.292 [2024-11-19 08:01:28.877588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.877623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 
00:37:37.293 [2024-11-19 08:01:28.877754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.877791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 00:37:37.293 [2024-11-19 08:01:28.877932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.877968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 00:37:37.293 [2024-11-19 08:01:28.878104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.878139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 00:37:37.293 [2024-11-19 08:01:28.878268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.878304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 00:37:37.293 [2024-11-19 08:01:28.878443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.878480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 
00:37:37.293 [2024-11-19 08:01:28.878638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.878700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 00:37:37.293 [2024-11-19 08:01:28.878839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.878877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 00:37:37.293 [2024-11-19 08:01:28.879095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.879130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 00:37:37.293 [2024-11-19 08:01:28.879235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.879270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 00:37:37.293 [2024-11-19 08:01:28.879412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.293 [2024-11-19 08:01:28.879447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.293 qpair failed and we were unable to recover it. 
00:37:37.293 [2024-11-19 08:01:28.879563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.879599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.879714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.879751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.879888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.879930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.880077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.880112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.880253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.880288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.880400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.880436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.880580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.880630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.880752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.880790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.880982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.881031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.881170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.881205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.881319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.881355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.881469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.881506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.881639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.881674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.881800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.881839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.882006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.882042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.882177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.882212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.882353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.882387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.882522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.882559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.882700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.882736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.882845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.882880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.882988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.883024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.883129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.883164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.883309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.883346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.883487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.883523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.883684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.883740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.293 [2024-11-19 08:01:28.883885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.293 [2024-11-19 08:01:28.883921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.293 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.884060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.884096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.884259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.884294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.884398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.884433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.884578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.884613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.884762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.884799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.884906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.884940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.885062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.885097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.885229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.885264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.885375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.885410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.885547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.885582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.885713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.885749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.885879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.885930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.886145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.886194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.886335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.886372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.886514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.886551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.886709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.886745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.886856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.886897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.887036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.887071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.887216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.887252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.887363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.887397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.887529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.887564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.887681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.887741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.887921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.887959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.888084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.888121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.888260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.888296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.888463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.888501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.888655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.888716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.888834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.888870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.889017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.889052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.889162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.889197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.889343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.889378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.889549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.889587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.889750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.889787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.889940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.889978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.890131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.890166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.890279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.890314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.890464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.294 [2024-11-19 08:01:28.890500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.294 qpair failed and we were unable to recover it.
00:37:37.294 [2024-11-19 08:01:28.890608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.890644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.890796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.890831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.890943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.890990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.891090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.891125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.891275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.891311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.891476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.891512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.891640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.891705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.891833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.891882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.892003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.892039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.892208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.892242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.892353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.892400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.892537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.892572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.892720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.892758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.892900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.892935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.893058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.893093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.893238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.893273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.893408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.893444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.893578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.893614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.893761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.893797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.893961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.894010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.894145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.894184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.894285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.894320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.894437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.894475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.894626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.894666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.894793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.894830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.894970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.895005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.895136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.895170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.895291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.895339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.895476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.895512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.895687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.895747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.895884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.895921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.896076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.896112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.896266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.896301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.896443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.896478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.896630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.896665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.896777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.896813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.896956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.897001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.295 [2024-11-19 08:01:28.897132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.295 [2024-11-19 08:01:28.897181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.295 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.897332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.897368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.897507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.897541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.897658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.897709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.897854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.897888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.898007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.898041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.898140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.898174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.898305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.898339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.898477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.898511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.898649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.898696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.898855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.898890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.898993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.899027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.899161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.899195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.899328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.899362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.899460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.296 [2024-11-19 08:01:28.899494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.296 qpair failed and we were unable to recover it.
00:37:37.296 [2024-11-19 08:01:28.899633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.899678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.899801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.899836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.899981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.900016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.900121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.900155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.900271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.900305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 
00:37:37.296 [2024-11-19 08:01:28.900416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.900450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.900552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.900586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.900726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.900767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.900877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.900910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.901045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.901079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 
00:37:37.296 [2024-11-19 08:01:28.901216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.901250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.901399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.901435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.901587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.901634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.901826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.901865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.902028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.902064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 
00:37:37.296 [2024-11-19 08:01:28.902242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.902279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.902415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.902462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.902603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.902639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.902807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.902855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.903032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.903076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 
00:37:37.296 [2024-11-19 08:01:28.903210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.903245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.903387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.903420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.903541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.296 [2024-11-19 08:01:28.903600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.296 qpair failed and we were unable to recover it. 00:37:37.296 [2024-11-19 08:01:28.903746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.903795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.903942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.903985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 
00:37:37.297 [2024-11-19 08:01:28.904157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.904192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.904336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.904370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.904500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.904534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.904686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.904742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.904868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.904910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 
00:37:37.297 [2024-11-19 08:01:28.905071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.905110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.905221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.905256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.905371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.905406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.905521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.905555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.905740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.905778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 
00:37:37.297 [2024-11-19 08:01:28.905894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.905939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.906064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.906098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.906230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.906264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.906418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.906467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.906610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.906648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 
00:37:37.297 [2024-11-19 08:01:28.906809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.906845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.906955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.907000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.907106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.907140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.907249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.907283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.907422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.907457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 
00:37:37.297 [2024-11-19 08:01:28.907575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.907613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.907764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.907801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.907915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.907959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.908082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.908117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.908229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.908264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 
00:37:37.297 [2024-11-19 08:01:28.908381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.908415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.908552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.908586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.908722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.908757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.908864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.908899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 00:37:37.297 [2024-11-19 08:01:28.909010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.909044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.297 qpair failed and we were unable to recover it. 
00:37:37.297 [2024-11-19 08:01:28.909184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.297 [2024-11-19 08:01:28.909219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.909360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.909394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.909522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.909558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.909682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.909741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.909890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.909929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 
00:37:37.298 [2024-11-19 08:01:28.910045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.910081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.910202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.910237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.910387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.910423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.910538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.910573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.910699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.910736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 
00:37:37.298 [2024-11-19 08:01:28.910878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.910913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.911055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.911089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.911258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.911293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.911426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.911463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.911567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.911601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 
00:37:37.298 [2024-11-19 08:01:28.911768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.911803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.911951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.912000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.912130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.912167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.912308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.912343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.912467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.912503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 
00:37:37.298 [2024-11-19 08:01:28.912641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.912683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.912802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.912837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.912984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.913019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.913133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.913172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 00:37:37.298 [2024-11-19 08:01:28.913285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.298 [2024-11-19 08:01:28.913320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.298 qpair failed and we were unable to recover it. 
00:37:37.301 [2024-11-19 08:01:28.933883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.933918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.934034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.934080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.934184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.934220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.934362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.934401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.934531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.934581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 
00:37:37.301 [2024-11-19 08:01:28.934733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.934772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.934900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.934948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.935087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.935122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.935239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.935274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.935438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.935473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 
00:37:37.301 [2024-11-19 08:01:28.935608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.935642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.935782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.935831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.935955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.935991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.936109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.936145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 00:37:37.301 [2024-11-19 08:01:28.936321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.301 [2024-11-19 08:01:28.936356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.301 qpair failed and we were unable to recover it. 
00:37:37.301 [2024-11-19 08:01:28.936500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.936538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.936664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.936710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.936850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.936891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.937007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.937042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.937201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.937235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 
00:37:37.302 [2024-11-19 08:01:28.937373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.937407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.937565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.937600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.937752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.937802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.937937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.937996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.938153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.938198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 
00:37:37.302 [2024-11-19 08:01:28.938347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.938383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.938524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.938558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.938703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.938738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.938875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.938909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.939017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.939051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 
00:37:37.302 [2024-11-19 08:01:28.939182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.939216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.939340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.939377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.939522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.939556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.939664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.939707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.939875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.939909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 
00:37:37.302 [2024-11-19 08:01:28.940022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.940057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.940195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.940230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.940345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.940379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.940481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.940516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.940652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.940687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 
00:37:37.302 [2024-11-19 08:01:28.940821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.940870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.941027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.941066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.941178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.941213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.941330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.941365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.941512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.941549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 
00:37:37.302 [2024-11-19 08:01:28.941696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.941732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.941873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.941908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.942018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.302 [2024-11-19 08:01:28.942063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.302 qpair failed and we were unable to recover it. 00:37:37.302 [2024-11-19 08:01:28.942195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.942245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.942392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.942430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 
00:37:37.303 [2024-11-19 08:01:28.942540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.942578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.942714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.942751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.942858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.942894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.943001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.943037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.943147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.943181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 
00:37:37.303 [2024-11-19 08:01:28.943292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.943326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.943466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.943509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.943624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.943665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.943802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.943838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.943986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.944021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 
00:37:37.303 [2024-11-19 08:01:28.944171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.944207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.944346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.944382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.944497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.944533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.944641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.944680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.944806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.944843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 
00:37:37.303 [2024-11-19 08:01:28.944957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.944993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.945138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.945173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.945308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.945342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.945475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.945509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.945616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.945651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 
00:37:37.303 [2024-11-19 08:01:28.945779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.945815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.945957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.945992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.946162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.946196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.946306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.946340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.946459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.946500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 
00:37:37.303 [2024-11-19 08:01:28.946608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.946643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.946795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.946830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.946929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.946964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.947094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.947129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 00:37:37.303 [2024-11-19 08:01:28.947261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.303 [2024-11-19 08:01:28.947296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.303 qpair failed and we were unable to recover it. 
00:37:37.304 [2024-11-19 08:01:28.954960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.304 [2024-11-19 08:01:28.954998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.304 qpair failed and we were unable to recover it.
00:37:37.304 [2024-11-19 08:01:28.955140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.304 [2024-11-19 08:01:28.955175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.304 qpair failed and we were unable to recover it.
00:37:37.304 [2024-11-19 08:01:28.955335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.304 [2024-11-19 08:01:28.955324] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:37:37.304 [2024-11-19 08:01:28.955370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.304 qpair failed and we were unable to recover it.
00:37:37.305 [2024-11-19 08:01:28.955447] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:37.305 [2024-11-19 08:01:28.955510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.305 [2024-11-19 08:01:28.955543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.305 qpair failed and we were unable to recover it.
00:37:37.305 [2024-11-19 08:01:28.955667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.955718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.955858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.955891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.956038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.956081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.956213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.956249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.956374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.956424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 
00:37:37.305 [2024-11-19 08:01:28.956564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.956613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.956755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.956804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.956950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.956986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.957109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.957143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.957276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.957311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 
00:37:37.305 [2024-11-19 08:01:28.957427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.957462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.957567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.957601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.957740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.957780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.957886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.957921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.958075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.958109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 
00:37:37.305 [2024-11-19 08:01:28.958252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.958286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.958404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.958439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.958604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.958638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.958769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.958816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.958927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.958962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 
00:37:37.305 [2024-11-19 08:01:28.959118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.959152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.959269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.959304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.959455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.959503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.959664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.959732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.959843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.959879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 
00:37:37.305 [2024-11-19 08:01:28.960000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.960035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.960166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.960200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.960304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.960338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.960453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.960488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.960610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.960649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 
00:37:37.305 [2024-11-19 08:01:28.960823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.960872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.961028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.961075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.961211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.961245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.961354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.961388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.961499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.961532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 
00:37:37.305 [2024-11-19 08:01:28.961665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.305 [2024-11-19 08:01:28.961715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.305 qpair failed and we were unable to recover it. 00:37:37.305 [2024-11-19 08:01:28.961839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.961873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.962014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.962057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.962197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.962236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.962372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.962405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 
00:37:37.306 [2024-11-19 08:01:28.962539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.962589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.962743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.962793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.962918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.962955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.963109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.963155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.963297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.963332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 
00:37:37.306 [2024-11-19 08:01:28.963448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.963484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.963625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.963660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.963811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.963846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.963993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.964028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.964129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.964164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 
00:37:37.306 [2024-11-19 08:01:28.964327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.964363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.964501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.964540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.964767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.964806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.964945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.964979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 00:37:37.306 [2024-11-19 08:01:28.965090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.306 [2024-11-19 08:01:28.965125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.306 qpair failed and we were unable to recover it. 
00:37:37.306 [2024-11-19 08:01:28.965240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.965274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.965454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.965503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.965674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.965734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.965869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.965918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.966039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.966080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.966219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.966254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.966389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.966424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.966572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.966605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.966740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.966789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.966943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.966993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.967164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.967210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.967332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.967367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.967483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.967519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.967702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.306 [2024-11-19 08:01:28.967738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.306 qpair failed and we were unable to recover it.
00:37:37.306 [2024-11-19 08:01:28.967877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.967913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.968025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.968059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.968183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.968216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.968323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.968357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.968464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.968499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.968626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.968669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.968789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.968823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.968922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.968956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.969101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.969135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.969236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.969271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.969401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.969450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.969599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.969635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.969784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.969833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.969946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.969980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.970110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.970152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.970309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.970343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.970490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.970525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.970637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.970670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.970797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.970834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.971014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.971062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.971181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.971217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.971325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.971360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.971495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.971530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.971658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.971721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.971841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.971877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.972022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.972056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.972164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.972197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.972335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.972369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.972475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.972509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.972726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.972762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.972995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.973056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.973215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.973252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.973356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.973391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.973536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.973574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.973784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.973820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.973925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.307 [2024-11-19 08:01:28.973959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.307 qpair failed and we were unable to recover it.
00:37:37.307 [2024-11-19 08:01:28.974126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 08:01:28.974161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.307 [2024-11-19 08:01:28.974277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.307 [2024-11-19 08:01:28.974313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.307 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 08:01:28.974461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 08:01:28.974497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 08:01:28.974630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 08:01:28.974665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 00:37:37.308 [2024-11-19 08:01:28.974807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.308 [2024-11-19 08:01:28.974842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.308 qpair failed and we were unable to recover it. 
00:37:37.308 [2024-11-19 08:01:28.974960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.975007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.975150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.975185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.975322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.975357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.975499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.975535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.975670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.975715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.975835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.975874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.975985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.976029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.976153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.976204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.976371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.976408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.976523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.976559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.976670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.976720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.976835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.976870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.976997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.977053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.977171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.977209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.977352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.977387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.977500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.977534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.977706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.977741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.977875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.977924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.978071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.978106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.978217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.978252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.978355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.978390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.978497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.978531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.978767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.978821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.978932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.978968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.979086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.979124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.979269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.979304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.979423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.979459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.979571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.979606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.979726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.979762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.979877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.979912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.980029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.980066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.980203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.980238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.980385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.980420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.980543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.308 [2024-11-19 08:01:28.980592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.308 qpair failed and we were unable to recover it.
00:37:37.308 [2024-11-19 08:01:28.980743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.309 [2024-11-19 08:01:28.980779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.309 qpair failed and we were unable to recover it.
00:37:37.309 [2024-11-19 08:01:28.980914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.309 [2024-11-19 08:01:28.980949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.309 qpair failed and we were unable to recover it.
00:37:37.309 [2024-11-19 08:01:28.981073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.309 [2024-11-19 08:01:28.981107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.309 qpair failed and we were unable to recover it.
00:37:37.309 [2024-11-19 08:01:28.981229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.309 [2024-11-19 08:01:28.981263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.309 qpair failed and we were unable to recover it.
00:37:37.309 [2024-11-19 08:01:28.981389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.309 [2024-11-19 08:01:28.981425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.309 qpair failed and we were unable to recover it.
00:37:37.309 [2024-11-19 08:01:28.981539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.309 [2024-11-19 08:01:28.981573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.309 qpair failed and we were unable to recover it.
00:37:37.309 [2024-11-19 08:01:28.981793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.981830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.981940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.981975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.982111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.982145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.982291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.982324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.982477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.982513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 
00:37:37.309 [2024-11-19 08:01:28.982695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.982732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.982847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.982883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.983041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.983076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.983241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.983280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.983414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.983449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 
00:37:37.309 [2024-11-19 08:01:28.983561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.983596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.983717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.983752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.983881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.983916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.984019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.984064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.984176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.984211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 
00:37:37.309 [2024-11-19 08:01:28.984354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.984403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.984526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.984567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.984711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.984759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.984873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.984910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.985064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.985100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 
00:37:37.309 [2024-11-19 08:01:28.985233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.985267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.985385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.985419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.985561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.985604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.985758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.985796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.985909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.985946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 
00:37:37.309 [2024-11-19 08:01:28.986074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.986109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.986221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.986257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.986393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.986428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.986528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.986563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.986715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.986765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 
00:37:37.309 [2024-11-19 08:01:28.986868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.986903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.309 [2024-11-19 08:01:28.987072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.309 [2024-11-19 08:01:28.987107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.309 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.987238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.987278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.987437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.987471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.987592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.987641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 
00:37:37.310 [2024-11-19 08:01:28.987801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.987838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.987958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.987995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.988163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.988200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.988335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.988370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.988522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.988558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 
00:37:37.310 [2024-11-19 08:01:28.988706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.988742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.988878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.988927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.989100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.989136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.989282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.989317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.989456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.989490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 
00:37:37.310 [2024-11-19 08:01:28.989607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.989640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.989789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.989824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.989936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.989970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.990150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.990198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.990337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.990375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 
00:37:37.310 [2024-11-19 08:01:28.990480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.990516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.990669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.990720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.990860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.990895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.991009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.991044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.991218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.991253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 
00:37:37.310 [2024-11-19 08:01:28.991387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.991422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.991571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.991607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.991734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.991773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.991903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.991951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.992082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.992117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 
00:37:37.310 [2024-11-19 08:01:28.992224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.992258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.992433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.992468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.992602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.992642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.992774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.992819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.992960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.992996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 
00:37:37.310 [2024-11-19 08:01:28.993121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.993156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.993281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.993330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.993507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.993544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.993654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.993700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.310 qpair failed and we were unable to recover it. 00:37:37.310 [2024-11-19 08:01:28.993835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.310 [2024-11-19 08:01:28.993870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 
00:37:37.311 [2024-11-19 08:01:28.993972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.994017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.994141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.994189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.994346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.994382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.994522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.994558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.994674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.994753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 
00:37:37.311 [2024-11-19 08:01:28.994874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.994909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.995030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.995064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.995218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.995253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.995374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.995411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.995524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.995560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 
00:37:37.311 [2024-11-19 08:01:28.995707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.995742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.995875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.995912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.996056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.996091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.996240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.996275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 00:37:37.311 [2024-11-19 08:01:28.996427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.311 [2024-11-19 08:01:28.996476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.311 qpair failed and we were unable to recover it. 
00:37:37.311 [2024-11-19 08:01:28.996594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.996631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.996803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.996840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.996977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.997018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.997127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.997163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.997284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.997319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.997468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.997504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.997643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.997684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.997801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.997836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.997940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.997974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.998090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.998125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.998247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.998291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.998430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.998467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.999582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.999630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.999780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.999829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:28.999939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:28.999988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:29.000128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:29.000166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:29.000274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:29.000308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:29.000425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.311 [2024-11-19 08:01:29.000466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.311 qpair failed and we were unable to recover it.
00:37:37.311 [2024-11-19 08:01:29.000603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.000638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.000881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.000921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.001070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.001106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.001220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.001255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.001390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.001425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.001556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.001605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.001763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.001812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.001929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.001965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.002111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.002146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.002280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.002315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.002449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.002484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.002604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.002652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.002802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.002841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.002971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.003019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.003170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.003206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.003346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.003381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.003501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.003537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.003680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.003726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.003834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.003868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.003973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.004015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.004135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.004168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.004283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.004319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.004447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.004483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.004597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.004642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.004783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.004819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.004916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.004951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.005098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.005133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.005261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.005296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.005437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.005472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.005575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.005610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.005742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.005777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.005888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.005925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.006063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.006097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.006250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.006284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.006427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.006462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.006587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.006634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.006805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.006854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.312 qpair failed and we were unable to recover it.
00:37:37.312 [2024-11-19 08:01:29.006975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.312 [2024-11-19 08:01:29.007012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.007186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.007222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.007373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.007415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.007549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.007584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.007715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.007751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.007882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.007931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.008055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.008091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.008252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.008287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.008412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.008446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.008553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.008587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.008802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.008836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.008949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.008993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.009108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.009142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.009285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.009329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.009441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.009475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.009574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.009615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.009736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.009771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.009882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.009915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.010066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.010100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.010243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.010289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.010403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.010437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.010543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.010577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.010686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.010739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.010851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.010886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.011115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.011156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.011294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.011328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.011466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.011500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.011653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.011704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.011818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.011852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.011959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.012005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.012151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.012185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.012298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.012332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.012444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.012479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.012707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.012743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.012861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.012896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.013102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.013136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.013276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.013310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.013447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.313 [2024-11-19 08:01:29.013481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.313 qpair failed and we were unable to recover it.
00:37:37.313 [2024-11-19 08:01:29.013591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.013625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.013761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.013811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.013938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.013975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.014151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.014197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.014316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.014351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.014510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.014559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.014686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.014741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.014856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.014892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.015064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.015098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.015215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.015250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.015370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.015406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.015567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.015601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.015727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.015761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.015878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.015926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.016079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.016116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.016263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.016298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.016410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.016444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.016581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.016630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.016783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.016833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.016953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.016987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.017134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.017168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.017278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.314 [2024-11-19 08:01:29.017312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.314 qpair failed and we were unable to recover it.
00:37:37.314 [2024-11-19 08:01:29.017411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.017445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.017561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.017597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.017737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.017776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.017886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.017922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.018069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.018110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 
00:37:37.314 [2024-11-19 08:01:29.018250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.018287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.018443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.018493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.018661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.018713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.018831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.018867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.019021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.019064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 
00:37:37.314 [2024-11-19 08:01:29.019175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.019211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.019374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.019408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.019549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.019585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.019720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.019770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.314 [2024-11-19 08:01:29.019886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.019922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 
00:37:37.314 [2024-11-19 08:01:29.020039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.314 [2024-11-19 08:01:29.020074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.314 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.020178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.020214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.020324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.020360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.020498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.020542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.020645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.020694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 
00:37:37.315 [2024-11-19 08:01:29.020809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.020848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.020960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.020996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.021162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.021200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.021320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.021356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.021520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.021555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 
00:37:37.315 [2024-11-19 08:01:29.021683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.021730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.021847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.021881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.021981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.022025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.022167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.022202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.022312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.022346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 
00:37:37.315 [2024-11-19 08:01:29.022519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.022554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.022665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.022716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.022848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.022897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.023018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.023054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.023167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.023203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 
00:37:37.315 [2024-11-19 08:01:29.023311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.023347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.023477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.023523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.023659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.023701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.023810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.023845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.023949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.023983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 
00:37:37.315 [2024-11-19 08:01:29.024809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.024848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.024966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.025001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.025737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.025778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.025905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.025940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.026065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.026099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 
00:37:37.315 [2024-11-19 08:01:29.027079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.027115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.027319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.027354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.028102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.028154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.028316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.028351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.029125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.029185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 
00:37:37.315 [2024-11-19 08:01:29.029372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.029406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.030133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.030189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.315 qpair failed and we were unable to recover it. 00:37:37.315 [2024-11-19 08:01:29.030395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.315 [2024-11-19 08:01:29.030430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.030573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.030607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.030745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.030779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 08:01:29.030891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.030925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.031053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.031089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.031204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.031248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.031397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.031432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.031550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.031584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 08:01:29.031685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.031726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.031843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.031877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.032007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.032041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.032158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.032192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.032325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.032359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 08:01:29.032470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.032504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.032696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.032747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.032874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.032923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.033058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.033096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.033208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.033243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 08:01:29.033375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.033410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.033520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.033555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.033707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.033744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.033850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.033884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.034102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.034135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 08:01:29.034232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.034266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.034400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.034433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.034548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.034597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.034747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.034784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.034916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.034965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 08:01:29.035137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.035173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.035311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.035344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.035478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.035511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.035647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.035682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.316 [2024-11-19 08:01:29.035823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.035872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 
00:37:37.316 [2024-11-19 08:01:29.036030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.316 [2024-11-19 08:01:29.036066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.316 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.036208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.036243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.036357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.036391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.036526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.036560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.036663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.036715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 08:01:29.036823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.036857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.037022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.037057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.037197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.037230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.037366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.037400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.037514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.037553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 08:01:29.037661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.037703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.037830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.037887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.038003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.038047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.038209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.038242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.038377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.038411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 08:01:29.038517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.038551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.038664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.038715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.038853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.038887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.039042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.039081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.039218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.039254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 08:01:29.039407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.039456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.039685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.039727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.039868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.039902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.040016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.040050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.040182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.040216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 08:01:29.040318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.040351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.040460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.040494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.040600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.040634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.040763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.040797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.040909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.040943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 08:01:29.041072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.041106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.041245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.041278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.041414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.041448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.041593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.041628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.041795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.041844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 
00:37:37.317 [2024-11-19 08:01:29.041961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.041999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.042122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.042157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.042305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.317 [2024-11-19 08:01:29.042351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.317 qpair failed and we were unable to recover it. 00:37:37.317 [2024-11-19 08:01:29.042483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.042518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.042682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.042741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 08:01:29.042891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.042926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.043070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.043107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.043242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.043277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.043411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.043446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.043573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.043628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 08:01:29.043775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.043812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.043949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.044000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.044110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.044145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.044317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.044355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.044488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.044523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 08:01:29.044623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.044658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.044795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.044845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.045027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.045065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.045231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.045266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.045374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.045410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 08:01:29.045586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.045625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.045759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.045795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.046028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.046075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.046236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.046272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.046373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.046408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 08:01:29.046546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.046580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.046712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.046761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.046909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.046946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.047088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.047124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.047266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.047302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 08:01:29.047415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.047453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.047620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.047657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.047784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.047821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.047928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.047964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.048102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.048136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 08:01:29.048272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.048306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.048445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.048479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.048634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.048697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.048848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.048884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.318 [2024-11-19 08:01:29.048997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.049033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 
00:37:37.318 [2024-11-19 08:01:29.049174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.318 [2024-11-19 08:01:29.049211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.318 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.049348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.049384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.049499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.049533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.049715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.049764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.049916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.049955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 08:01:29.050085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.050121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.050261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.050296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.050433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.050469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.050649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.050700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.050808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.050848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 08:01:29.050961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.050995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.051125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.051159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.051264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.051299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.051440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.051478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.051634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.051679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 08:01:29.051794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.051828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.051933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.051968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.052129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.052163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.052279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.052315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.052435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.052469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 08:01:29.052574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.052607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.052757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.052792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.052936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.052973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.053106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.053141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.053281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.053316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 08:01:29.053426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.053462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.053596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.053630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.053768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.053804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.053915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.053950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.054100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.054135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 08:01:29.054272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.054306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.054410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.054444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.054600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.054649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.054812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.054862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.054985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.055021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 
00:37:37.319 [2024-11-19 08:01:29.055156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.055190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.055334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.055369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.055511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.319 [2024-11-19 08:01:29.055546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.319 qpair failed and we were unable to recover it. 00:37:37.319 [2024-11-19 08:01:29.055663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.055716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.055830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.055866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 08:01:29.055990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.056030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.056165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.056201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.056310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.056345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.056459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.056493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.056627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.056661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 08:01:29.056777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.056811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.056925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.056960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.057070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.057104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.057217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.057252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.057403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.057444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 08:01:29.057556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.057590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.057709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.057744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.057881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.057915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.058085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.058119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.058259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.058292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 08:01:29.058433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.058468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.058650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.058714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.058846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.058885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.059000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.059047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.059182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.059217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 08:01:29.059330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.059365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.059479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.059514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.059625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.059660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.059803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.059853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.059964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.060010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 08:01:29.060147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.060182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.060292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.060326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.060471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.060507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.060746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.060794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.060917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.060953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 
00:37:37.320 [2024-11-19 08:01:29.061063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.061097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.320 [2024-11-19 08:01:29.061259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.320 [2024-11-19 08:01:29.061294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.320 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.061426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.061460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.061597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.061633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.061802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.061851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 08:01:29.061974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.062017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.062141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.062175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.062293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.062328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.062459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.062493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.062639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.062680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 08:01:29.062792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.062826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.062922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.062956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.063098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.063132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.063266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.063300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.063415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.063453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 08:01:29.063589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.063638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.063770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.063808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.063908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.063943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.064073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.064111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.064225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.064266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 08:01:29.064402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.064436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.064577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.064612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.064748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.064787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.064901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.064936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.065086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.065121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 08:01:29.065254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.065289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.065401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.065437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.065578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.065613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.065747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.065784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.065892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.065927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 08:01:29.066060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.066096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.066234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.066269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.066404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.066439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.066584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.066618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 00:37:37.321 [2024-11-19 08:01:29.066778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.321 [2024-11-19 08:01:29.066827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.321 qpair failed and we were unable to recover it. 
00:37:37.321 [2024-11-19 08:01:29.066942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.321 [2024-11-19 08:01:29.066988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.321 qpair failed and we were unable to recover it.
00:37:37.321 [2024-11-19 08:01:29.067130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.321 [2024-11-19 08:01:29.067164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.321 qpair failed and we were unable to recover it.
00:37:37.321 [2024-11-19 08:01:29.067324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.321 [2024-11-19 08:01:29.067358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.321 qpair failed and we were unable to recover it.
00:37:37.321 [2024-11-19 08:01:29.067465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.321 [2024-11-19 08:01:29.067499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.321 qpair failed and we were unable to recover it.
00:37:37.321 [2024-11-19 08:01:29.067610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.067644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.067786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.067822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.067935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.067973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.068124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.068159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.068293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.068327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.068464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.068498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.068614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.068648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.068767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.068803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.068944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.068986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.069097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.069132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.069275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.069310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.069436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.069471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.069573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.069608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.069760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.069796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.069902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.069937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.070050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.070084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.070180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.070214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.070322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.070358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.070472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.070521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.070702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.070738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.070848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.070892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.071008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.071052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.071209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.071244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.071353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.071387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.071497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.071532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.071646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.071682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.071799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.071833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.072010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.072044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.072158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.072193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.072302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.072338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.072454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.072489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.072597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.072631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.072758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.072793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.072936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.072971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.073085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.073119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.073222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.073256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.073410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.073447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.322 [2024-11-19 08:01:29.073571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.322 [2024-11-19 08:01:29.073620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.322 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.073755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.073793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.073931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.073967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.074086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.074121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.074284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.074319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.074454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.074490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.074627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.074682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.074804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.074839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.074949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.074984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.075118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.075153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.075278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.075327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.075471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.075508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.075633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.075684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.075813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.075849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.075961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.076003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.076137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.076172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.076306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.076341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.076462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.076501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.076632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.076684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.076823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.076860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.076973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.077017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.077129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.077164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.077285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.077321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.077490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.077531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.077644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.077683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.077819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.077867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.077989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.078025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.078161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.078196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.078305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.078339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.078454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.078491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.078670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.078738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.078883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.078919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.079038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.079073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.079183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.079218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.079329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.079364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.079475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.079511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.079623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.079659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.079784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.323 [2024-11-19 08:01:29.079819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.323 qpair failed and we were unable to recover it.
00:37:37.323 [2024-11-19 08:01:29.079922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.079956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.080080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.080129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.080250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.080288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.080437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.080473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.080609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.080656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.080788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.080837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.081008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.081045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.081164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.081200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.081311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.081346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.081461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.081498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.081625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.081673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.081849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.081887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.082005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.082041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.082177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.082213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.082352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.082388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.082532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.082567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.082727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.082776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.082890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.082925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.083061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.083096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.083238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.083272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.083406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.083441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.083618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.083666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.083787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.083823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.084001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.084051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.084202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.084239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.084342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.084384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.084491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.084527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.084662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.084704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.084855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.084903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.085047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.085084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.085194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.085229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.085394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.085428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.085534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.085569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.085711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.085745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.085885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.085923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.086059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:37.324 [2024-11-19 08:01:29.086094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:37.324 qpair failed and we were unable to recover it.
00:37:37.324 [2024-11-19 08:01:29.086227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 08:01:29.086262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 08:01:29.086398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.324 [2024-11-19 08:01:29.086435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.324 qpair failed and we were unable to recover it. 00:37:37.324 [2024-11-19 08:01:29.086544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.086579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.086723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.086758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.086894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.086929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 
00:37:37.325 [2024-11-19 08:01:29.087066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.087100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.087206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.087240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.087340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.087373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.087486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.087520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.087632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.087666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 
00:37:37.325 [2024-11-19 08:01:29.087783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.087820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.087948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.087996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.088118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.088154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.088262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.088298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.088437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.088472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 
00:37:37.325 [2024-11-19 08:01:29.088581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.088615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.088732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.088768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.088902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.088936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.089030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.089064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.089172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.089206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 
00:37:37.325 [2024-11-19 08:01:29.089344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.089380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.089533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.089582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.089708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.089745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.089858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.089893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.090009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.090043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 
00:37:37.325 [2024-11-19 08:01:29.090146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.090180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.090318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.090353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.090465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.090501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.090612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.090646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.090764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.090805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 
00:37:37.325 [2024-11-19 08:01:29.090941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.090976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.091153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.091203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.091319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.091354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.091518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.091554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 00:37:37.325 [2024-11-19 08:01:29.091665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.325 [2024-11-19 08:01:29.091709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.325 qpair failed and we were unable to recover it. 
00:37:37.325 [2024-11-19 08:01:29.091822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.091857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.091960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.091995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.092104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.092140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.092252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.092288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.092402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.092437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 08:01:29.092575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.092609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.092726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.092765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.092918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.092955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.093101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.093136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.093275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.093311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 08:01:29.093419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.093454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.093600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.093648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.093768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.093804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.093941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.093976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.094118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.094153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 08:01:29.094256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.094290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.094432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.094469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.094617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.094653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.094782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.094821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.094963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.094999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 08:01:29.095109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.095143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.095285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.095319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.095456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.095491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.095602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.095636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.095763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.095812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 08:01:29.095928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.095964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.096103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.096139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.096281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.096315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.096451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.096485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.096611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.096660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 08:01:29.096811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.096848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.096964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.097003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.097116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.097152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.097293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.097328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.097461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.097502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 
00:37:37.326 [2024-11-19 08:01:29.097614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.097650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.097773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.326 [2024-11-19 08:01:29.097812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.326 qpair failed and we were unable to recover it. 00:37:37.326 [2024-11-19 08:01:29.097927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 08:01:29.097962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 08:01:29.098098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 08:01:29.098133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 08:01:29.098237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 08:01:29.098271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 
00:37:37.327 [2024-11-19 08:01:29.098380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 08:01:29.098415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 08:01:29.098528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 08:01:29.098564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 08:01:29.098714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 08:01:29.098762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 08:01:29.098905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 08:01:29.098941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 00:37:37.327 [2024-11-19 08:01:29.099052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.327 [2024-11-19 08:01:29.099090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.327 qpair failed and we were unable to recover it. 
00:37:37.330 [2024-11-19 08:01:29.117587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.117623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.117671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:37.330 [2024-11-19 08:01:29.117788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.117835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.117949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.117985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.118125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.118161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.118301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.118336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 
00:37:37.330 [2024-11-19 08:01:29.118461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.118495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.118628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.118663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.118818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.118853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.118978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.119026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.119160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.119195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 
00:37:37.330 [2024-11-19 08:01:29.119308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.119342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.119485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.119521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.119638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.119674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.119799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.119835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.119948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.119984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 
00:37:37.330 [2024-11-19 08:01:29.120147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.120181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.120286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.120321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.120450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.120485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.120606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.120655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.120788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.120837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 
00:37:37.330 [2024-11-19 08:01:29.120958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.120994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.121097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.121133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.121277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.121312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.121458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.121494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.121594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.121629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 
00:37:37.330 [2024-11-19 08:01:29.121861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.121899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.122061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.122096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.122256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.330 [2024-11-19 08:01:29.122290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.330 qpair failed and we were unable to recover it. 00:37:37.330 [2024-11-19 08:01:29.122401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.122435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.122567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.122602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 08:01:29.122736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.122786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.122906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.122941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.123050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.123085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.123216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.123251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.123365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.123399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 08:01:29.123521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.123569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.123720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.123756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.123876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.123915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.124061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.124098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.124237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.124278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 08:01:29.124416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.124451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.124553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.124589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.124774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.124823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.125003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.125039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.125146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.125182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 08:01:29.125283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.125318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.125484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.125520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.125632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.125668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.125848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.125896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.126075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.126112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 08:01:29.126222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.126256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.126387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.126421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.126559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.126594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.126740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.126775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.126935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.126986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 08:01:29.127105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.127142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.127278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.127314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.127456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.127490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.127602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.127636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.127941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.127989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 08:01:29.128105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.128140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.128302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.128337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.128443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.128477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.128633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.128681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 00:37:37.331 [2024-11-19 08:01:29.128876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.128924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.331 qpair failed and we were unable to recover it. 
00:37:37.331 [2024-11-19 08:01:29.129070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.331 [2024-11-19 08:01:29.129107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.129258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.129294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.129402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.129437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.129570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.129620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.129791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.129839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 08:01:29.130016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.130054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.130161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.130196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.130304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.130339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.130476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.130511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.130642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.130675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 08:01:29.130845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.130879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.130981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.131016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.131119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.131153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.131281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.131316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.131459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.131503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 08:01:29.131668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.131711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.131844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.131878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.132015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.132049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.132160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.132194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.132379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.132428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 08:01:29.132541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.132589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.132723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.132758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.132865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.132899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.133036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.133070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.133200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.133233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 08:01:29.133375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.133410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.133540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.133579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.133706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.133755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.133920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.133956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.134064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.134098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 08:01:29.134235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.134269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.134375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.134409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.134545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.134580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.134717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.134775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.134904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.134944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.332 [2024-11-19 08:01:29.135057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.135099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.135263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.135298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.135406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.135440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.135577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.135610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 00:37:37.332 [2024-11-19 08:01:29.135747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.332 [2024-11-19 08:01:29.135782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.332 qpair failed and we were unable to recover it. 
00:37:37.333 [2024-11-19 08:01:29.135916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.135951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.136086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.136120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.136237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.136280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.136402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.136451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.136595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.136643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 
00:37:37.333 [2024-11-19 08:01:29.136793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.136830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.136971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.137017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.137148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.137183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.137317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.137352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.137485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.137519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 A controller has encountered a failure and is being reset. 
00:37:37.333 [2024-11-19 08:01:29.137678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.137721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.137847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.137893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.138024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.138074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 00:37:37.333 [2024-11-19 08:01:29.138243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.138280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:37.333 qpair failed and we were unable to recover it. 
00:37:37.333 [2024-11-19 08:01:29.138464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.333 [2024-11-19 08:01:29.138510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:37.333 [2024-11-19 08:01:29.138539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2780 is same with the state(6) to be set 00:37:37.333 [2024-11-19 08:01:29.138580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:37:37.333 [2024-11-19 08:01:29.138609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:37:37.333 [2024-11-19 08:01:29.138636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:37:37.333 [2024-11-19 08:01:29.138666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:37:37.333 Unable to reset the controller. 00:37:37.591 [2024-11-19 08:01:29.250540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:37.591 [2024-11-19 08:01:29.250623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:37.591 [2024-11-19 08:01:29.250648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:37.591 [2024-11-19 08:01:29.250671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:37.591 [2024-11-19 08:01:29.250698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:37.591 [2024-11-19 08:01:29.253593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:37.591 [2024-11-19 08:01:29.253630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:37.591 [2024-11-19 08:01:29.253677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:37.591 [2024-11-19 08:01:29.253683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:38.157 08:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.158 08:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:37:38.158 08:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:38.158 08:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:38.158 08:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.158 08:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:38.158 08:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:38.158 08:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.158 08:01:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.158 Malloc0 00:37:38.158 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.158 08:01:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:38.158 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.158 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.418 [2024-11-19 08:01:30.091928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.418 08:01:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.418 [2024-11-19 08:01:30.122086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.418 08:01:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3138709 00:37:38.418 Controller properly reset. 
00:37:43.692 Initializing NVMe Controllers 00:37:43.692 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:43.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:43.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:43.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:43.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:43.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:43.692 Initialization complete. Launching workers. 00:37:43.692 Starting thread on core 1 00:37:43.692 Starting thread on core 2 00:37:43.692 Starting thread on core 3 00:37:43.692 Starting thread on core 0 00:37:43.692 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:43.692 00:37:43.692 real 0m11.577s 00:37:43.692 user 0m37.249s 00:37:43.692 sys 0m7.585s 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.693 ************************************ 00:37:43.693 END TEST nvmf_target_disconnect_tc2 00:37:43.693 ************************************ 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:43.693 08:01:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:43.693 rmmod nvme_tcp 00:37:43.693 rmmod nvme_fabrics 00:37:43.693 rmmod nvme_keyring 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3139121 ']' 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3139121 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3139121 ']' 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3139121 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139121 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139121' 00:37:43.693 killing process with pid 3139121 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3139121 00:37:43.693 08:01:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3139121 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:44.632 08:01:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.169 08:01:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:47.169 00:37:47.169 real 0m17.566s 00:37:47.169 user 1m5.267s 00:37:47.169 
sys 0m10.304s 00:37:47.169 08:01:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:47.169 08:01:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:47.169 ************************************ 00:37:47.169 END TEST nvmf_target_disconnect 00:37:47.169 ************************************ 00:37:47.169 08:01:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:47.169 00:37:47.169 real 7m38.470s 00:37:47.169 user 19m49.893s 00:37:47.169 sys 1m33.928s 00:37:47.169 08:01:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:47.169 08:01:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:47.169 ************************************ 00:37:47.169 END TEST nvmf_host 00:37:47.169 ************************************ 00:37:47.169 08:01:38 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:47.169 08:01:38 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:47.169 08:01:38 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:47.169 08:01:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:47.169 08:01:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:47.169 08:01:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:47.169 ************************************ 00:37:47.169 START TEST nvmf_target_core_interrupt_mode 00:37:47.169 ************************************ 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:47.169 * Looking for test storage... 
00:37:47.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:47.169 08:01:38 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:47.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.169 --rc 
genhtml_branch_coverage=1 00:37:47.169 --rc genhtml_function_coverage=1 00:37:47.169 --rc genhtml_legend=1 00:37:47.169 --rc geninfo_all_blocks=1 00:37:47.169 --rc geninfo_unexecuted_blocks=1 00:37:47.169 00:37:47.169 ' 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:47.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.169 --rc genhtml_branch_coverage=1 00:37:47.169 --rc genhtml_function_coverage=1 00:37:47.169 --rc genhtml_legend=1 00:37:47.169 --rc geninfo_all_blocks=1 00:37:47.169 --rc geninfo_unexecuted_blocks=1 00:37:47.169 00:37:47.169 ' 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:47.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.169 --rc genhtml_branch_coverage=1 00:37:47.169 --rc genhtml_function_coverage=1 00:37:47.169 --rc genhtml_legend=1 00:37:47.169 --rc geninfo_all_blocks=1 00:37:47.169 --rc geninfo_unexecuted_blocks=1 00:37:47.169 00:37:47.169 ' 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:47.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.169 --rc genhtml_branch_coverage=1 00:37:47.169 --rc genhtml_function_coverage=1 00:37:47.169 --rc genhtml_legend=1 00:37:47.169 --rc geninfo_all_blocks=1 00:37:47.169 --rc geninfo_unexecuted_blocks=1 00:37:47.169 00:37:47.169 ' 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.169 
08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.169 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.170 08:01:38 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:47.170 
08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:47.170 ************************************ 00:37:47.170 START TEST nvmf_abort 00:37:47.170 ************************************ 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:47.170 * Looking for test storage... 
00:37:47.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:47.170 08:01:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.170 --rc genhtml_branch_coverage=1 00:37:47.170 --rc genhtml_function_coverage=1 00:37:47.170 --rc genhtml_legend=1 00:37:47.170 --rc geninfo_all_blocks=1 00:37:47.170 --rc geninfo_unexecuted_blocks=1 00:37:47.170 00:37:47.170 ' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.170 --rc genhtml_branch_coverage=1 00:37:47.170 --rc genhtml_function_coverage=1 00:37:47.170 --rc genhtml_legend=1 00:37:47.170 --rc geninfo_all_blocks=1 00:37:47.170 --rc geninfo_unexecuted_blocks=1 00:37:47.170 00:37:47.170 ' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.170 --rc genhtml_branch_coverage=1 00:37:47.170 --rc genhtml_function_coverage=1 00:37:47.170 --rc genhtml_legend=1 00:37:47.170 --rc geninfo_all_blocks=1 00:37:47.170 --rc geninfo_unexecuted_blocks=1 00:37:47.170 00:37:47.170 ' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:47.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.170 --rc genhtml_branch_coverage=1 00:37:47.170 --rc genhtml_function_coverage=1 00:37:47.170 --rc genhtml_legend=1 00:37:47.170 --rc geninfo_all_blocks=1 00:37:47.170 --rc geninfo_unexecuted_blocks=1 00:37:47.170 00:37:47.170 ' 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.170 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.171 08:01:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:47.171 08:01:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:47.171 08:01:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:49.081 08:01:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:49.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:49.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:49.081 
08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:49.081 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:49.081 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:49.082 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:49.082 08:01:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:49.082 08:01:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:49.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:49.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:37:49.342 00:37:49.342 --- 10.0.0.2 ping statistics --- 00:37:49.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:49.342 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:49.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:49.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:37:49.342 00:37:49.342 --- 10.0.0.1 ping statistics --- 00:37:49.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:49.342 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3142057 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3142057 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3142057 ']' 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:49.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:49.342 08:01:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:49.342 [2024-11-19 08:01:41.163348] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:49.342 [2024-11-19 08:01:41.165803] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:37:49.342 [2024-11-19 08:01:41.165907] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:49.601 [2024-11-19 08:01:41.305202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:49.601 [2024-11-19 08:01:41.440900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:49.601 [2024-11-19 08:01:41.440996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:49.601 [2024-11-19 08:01:41.441026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:49.601 [2024-11-19 08:01:41.441048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:49.601 [2024-11-19 08:01:41.441072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:49.601 [2024-11-19 08:01:41.443771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:49.601 [2024-11-19 08:01:41.443851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:49.601 [2024-11-19 08:01:41.443874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:50.172 [2024-11-19 08:01:41.819811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:50.172 [2024-11-19 08:01:41.820860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:50.172 [2024-11-19 08:01:41.821606] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:50.172 [2024-11-19 08:01:41.821952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.431 [2024-11-19 08:01:42.152962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:37:50.431 Malloc0 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.431 Delay0 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.431 [2024-11-19 08:01:42.273194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.431 08:01:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:50.689 [2024-11-19 08:01:42.444100] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:52.593 Initializing NVMe Controllers 00:37:52.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:52.593 controller IO queue size 128 less than required 00:37:52.593 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:52.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:52.593 Initialization complete. Launching workers. 
00:37:52.593 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 23433 00:37:52.593 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 23490, failed to submit 66 00:37:52.593 success 23433, unsuccessful 57, failed 0 00:37:52.593 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:52.593 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.593 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.593 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.593 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:52.593 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:52.593 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:52.593 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:52.852 rmmod nvme_tcp 00:37:52.852 rmmod nvme_fabrics 00:37:52.852 rmmod nvme_keyring 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:52.852 08:01:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3142057 ']' 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3142057 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3142057 ']' 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3142057 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3142057 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3142057' 00:37:52.852 killing process with pid 3142057 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3142057 00:37:52.852 08:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3142057 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:54.228 08:01:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.228 08:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.136 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:56.136 00:37:56.136 real 0m9.215s 00:37:56.136 user 0m11.194s 00:37:56.136 sys 0m3.022s 00:37:56.136 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.136 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:56.136 ************************************ 00:37:56.136 END TEST nvmf_abort 00:37:56.136 ************************************ 00:37:56.136 08:01:48 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:56.136 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:56.136 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.136 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:56.395 ************************************ 00:37:56.395 START TEST nvmf_ns_hotplug_stress 00:37:56.395 ************************************ 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:56.395 * Looking for test storage... 
00:37:56.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.395 08:01:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.395 08:01:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.395 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:56.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.396 --rc genhtml_branch_coverage=1 00:37:56.396 --rc genhtml_function_coverage=1 00:37:56.396 --rc genhtml_legend=1 00:37:56.396 --rc geninfo_all_blocks=1 00:37:56.396 --rc geninfo_unexecuted_blocks=1 00:37:56.396 00:37:56.396 ' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:56.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.396 --rc genhtml_branch_coverage=1 00:37:56.396 --rc genhtml_function_coverage=1 00:37:56.396 --rc genhtml_legend=1 00:37:56.396 --rc geninfo_all_blocks=1 00:37:56.396 --rc geninfo_unexecuted_blocks=1 00:37:56.396 00:37:56.396 ' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:56.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.396 --rc genhtml_branch_coverage=1 00:37:56.396 --rc genhtml_function_coverage=1 00:37:56.396 --rc genhtml_legend=1 00:37:56.396 --rc geninfo_all_blocks=1 00:37:56.396 --rc geninfo_unexecuted_blocks=1 00:37:56.396 00:37:56.396 ' 00:37:56.396 08:01:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:56.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.396 --rc genhtml_branch_coverage=1 00:37:56.396 --rc genhtml_function_coverage=1 00:37:56.396 --rc genhtml_legend=1 00:37:56.396 --rc geninfo_all_blocks=1 00:37:56.396 --rc geninfo_unexecuted_blocks=1 00:37:56.396 00:37:56.396 ' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.396 08:01:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.396 
08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:56.396 08:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:58.301 
08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:58.301 08:01:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:58.301 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.301 08:01:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:58.301 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.301 
08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:58.301 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.301 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:58.302 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:58.302 
08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:58.302 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:58.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:58.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:37:58.561 00:37:58.561 --- 10.0.0.2 ping statistics --- 00:37:58.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.561 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:58.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:58.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:37:58.561 00:37:58.561 --- 10.0.0.1 ping statistics --- 00:37:58.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.561 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:58.561 08:01:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3144535 00:37:58.561 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:58.562 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3144535 00:37:58.562 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3144535 ']' 00:37:58.562 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.562 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.562 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.562 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.562 08:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:58.562 [2024-11-19 08:01:50.435259] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:58.562 [2024-11-19 08:01:50.437910] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:37:58.562 [2024-11-19 08:01:50.438003] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.821 [2024-11-19 08:01:50.594469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:58.821 [2024-11-19 08:01:50.737158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.821 [2024-11-19 08:01:50.737238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.821 [2024-11-19 08:01:50.737267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.821 [2024-11-19 08:01:50.737289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.821 [2024-11-19 08:01:50.737312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:58.821 [2024-11-19 08:01:50.740014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:58.821 [2024-11-19 08:01:50.740054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.821 [2024-11-19 08:01:50.740063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.389 [2024-11-19 08:01:51.115393] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:59.390 [2024-11-19 08:01:51.116419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:59.390 [2024-11-19 08:01:51.117172] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:59.390 [2024-11-19 08:01:51.117465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:59.650 08:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.650 08:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:37:59.650 08:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:59.650 08:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.650 08:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:59.650 08:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.650 08:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:37:59.650 08:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:59.909 [2024-11-19 08:01:51.709224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.909 08:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:00.168 08:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:00.427 [2024-11-19 08:01:52.269618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:00.427 08:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:00.686 08:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:00.945 Malloc0 00:38:00.945 08:01:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:01.266 Delay0 00:38:01.266 08:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:01.525 08:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:01.784 NULL1 00:38:01.784 08:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:38:02.349 08:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3144965 00:38:02.349 08:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:02.349 08:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:02.349 08:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:03.284 Read completed with error (sct=0, sc=11) 00:38:03.284 08:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:03.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:38:03.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:03.802 08:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:03.802 08:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:04.060 true 00:38:04.060 08:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:04.060 08:01:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:04.625 08:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:04.883 08:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:04.883 08:01:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:05.141 true 00:38:05.141 08:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:05.141 08:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:38:05.399 08:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:05.966 08:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:05.967 08:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:05.967 true 00:38:05.967 08:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:05.967 08:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:06.225 08:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:06.483 08:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:06.483 08:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:06.742 true 00:38:07.000 08:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:07.000 08:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:07.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.939 08:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:08.197 08:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:08.197 08:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:08.456 true 00:38:08.456 08:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:08.456 08:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:08.714 08:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:08.972 08:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:08.972 08:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:09.230 true 00:38:09.230 08:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:09.230 08:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:09.487 08:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.745 08:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:09.745 08:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:10.003 true 00:38:10.003 08:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:10.003 08:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.382 08:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.382 08:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:11.382 08:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:11.647 true 00:38:11.647 08:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:11.647 08:02:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.967 08:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:12.254 08:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:12.254 08:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:12.512 true 00:38:12.512 08:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:12.512 08:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:12.770 08:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:13.028 08:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:13.028 08:02:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:13.287 true 00:38:13.287 08:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 
00:38:13.287 08:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:14.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.225 08:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:14.484 08:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:14.484 08:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:14.742 true 00:38:14.742 08:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:14.742 08:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.001 08:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.259 08:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:15.259 08:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:15.517 true 00:38:15.518 08:02:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:15.518 08:02:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:16.455 08:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.713 08:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:16.713 08:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:16.713 true 00:38:16.972 08:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:16.972 08:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.230 08:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:17.487 08:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:17.487 08:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:17.744 true 00:38:17.744 08:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:17.744 08:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.681 08:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:18.938 08:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:18.938 08:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:19.196 true 00:38:19.196 08:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:19.196 08:02:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.454 08:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.712 08:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:19.712 08:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:19.970 true 00:38:19.970 08:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:19.970 08:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.229 08:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:20.487 08:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:20.487 08:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:20.745 true 00:38:20.745 08:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:20.745 08:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.680 08:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:38:21.938 08:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:21.938 08:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:22.195 true 00:38:22.195 08:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:22.195 08:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.454 08:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.712 08:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:22.712 08:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:22.970 true 00:38:22.970 08:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:22.970 08:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.228 08:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:38:23.486 08:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:23.486 08:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:23.743 true 00:38:24.002 08:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:24.002 08:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.936 08:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.936 08:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:24.936 08:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:25.195 true 00:38:25.195 08:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:25.195 08:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.454 08:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:26.023 08:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:26.023 08:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:26.023 true 00:38:26.023 08:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:26.023 08:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.281 08:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:26.847 08:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:26.847 08:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:26.847 true 00:38:26.847 08:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:26.847 08:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:27.785 08:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:28.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:28.044 08:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:28.044 08:02:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:28.611 true 00:38:28.611 08:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:28.611 08:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:28.611 08:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:29.178 08:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:29.178 08:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:29.178 true 00:38:29.178 08:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:29.178 08:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:38:30.115 08:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.115 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:30.374 08:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:30.374 08:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:30.633 true 00:38:30.633 08:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:30.633 08:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:30.891 08:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:31.149 08:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:31.149 08:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:31.408 true 00:38:31.408 08:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:31.408 08:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.344 08:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:32.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:32.344 Initializing NVMe Controllers 00:38:32.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:32.344 Controller IO queue size 128, less than required. 00:38:32.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:32.344 Controller IO queue size 128, less than required. 00:38:32.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:32.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:32.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:32.344 Initialization complete. Launching workers. 
00:38:32.344 ======================================================== 00:38:32.344 Latency(us) 00:38:32.344 Device Information : IOPS MiB/s Average min max 00:38:32.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 606.30 0.30 95802.84 3366.08 1018455.97 00:38:32.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7258.69 3.54 17635.76 3816.31 489436.36 00:38:32.344 ======================================================== 00:38:32.344 Total : 7864.99 3.84 23661.55 3366.08 1018455.97 00:38:32.344 00:38:32.603 08:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:32.603 08:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:32.861 true 00:38:32.861 08:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144965 00:38:32.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3144965) - No such process 00:38:32.861 08:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3144965 00:38:32.861 08:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.119 08:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:33.377 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:33.377 
08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:33.377 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:33.377 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:33.377 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:33.635 null0 00:38:33.635 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:33.635 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:33.635 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:33.893 null1 00:38:33.893 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:33.893 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:33.893 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:34.150 null2 00:38:34.151 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:34.151 08:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:34.151 08:02:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:38:34.410 null3
00:38:34.410 08:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:38:34.410 08:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:38:34.410 08:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:38:34.669 null4
00:38:34.669 08:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:38:34.669 08:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:38:34.669 08:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:38:34.928 null5
00:38:34.928 08:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:38:34.928 08:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:38:34.928 08:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:38:35.187 null6
00:38:35.187 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:38:35.187 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:38:35.187 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:38:35.447 null7
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:38:35.447 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3148958 3148959 3148960 3148963 3148965 3148967 3148969 3148971
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:35.448 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:35.705 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:35.706 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:35.706 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:35.706 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:35.706 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:35.706 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:35.706 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:35.706 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.272 08:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:36.530 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:36.530 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:36.530 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:36.531 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:36.531 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:36.531 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:36.531 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:36.531 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:36.789 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:37.047 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:37.047 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:37.047 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:37.047 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:37.047 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:37.047 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:37.047 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:37.047 08:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:37.305 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.306 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.306 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.306 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.306 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:37.306 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:37.564 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:37.564 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:37.564 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:37.564 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:37.564 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:37.564 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:37.564 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:37.564 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:37.823 08:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:38.390 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:38.390 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:38.390 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:38.390 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:38.390 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:38.390 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:38.390 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:38.390 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:38.649 08:02:30
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:38.649 08:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:38.649 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:38.907 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:38.907 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:38.907 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:38.907 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:38.907 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:38.907 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:38.907 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.907 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:39.167 08:02:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.167 08:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:39.426 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:39.426 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:39.426 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:39.426 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:39.426 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:39.426 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:39.426 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.426 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.684 08:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.684 08:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:39.684 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:39.942 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:39.942 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:39.943 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:39.943 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:39.943 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:39.943 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:39.943 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.943 08:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:40.588 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:40.847 08:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:41.414 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:41.414 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:41.414 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.414 08:02:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:41.414 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:41.414 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:41.414 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:41.414 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:41.672 08:02:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:41.672 rmmod nvme_tcp 00:38:41.672 rmmod nvme_fabrics 00:38:41.672 rmmod nvme_keyring 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3144535 ']' 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3144535 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3144535 ']' 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3144535 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3144535 00:38:41.672 08:02:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3144535' 00:38:41.672 killing process with pid 3144535 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3144535 00:38:41.672 08:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3144535 00:38:43.051 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:43.051 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.051 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.051 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:43.051 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:38:43.051 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.051 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.052 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.052 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.052 08:02:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.052 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.052 08:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:44.959 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:44.959 00:38:44.959 real 0m48.722s 00:38:44.959 user 3m18.754s 00:38:44.959 sys 0m21.521s 00:38:44.959 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:44.959 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:44.959 ************************************ 00:38:44.959 END TEST nvmf_ns_hotplug_stress 00:38:44.959 ************************************ 00:38:44.959 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:44.959 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:44.959 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:44.959 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:44.959 ************************************ 00:38:44.959 START TEST nvmf_delete_subsystem 00:38:44.959 ************************************ 00:38:44.959 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:45.220 * Looking for test storage... 00:38:45.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.220 
08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:45.220 08:02:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:45.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.220 --rc genhtml_branch_coverage=1 00:38:45.220 --rc genhtml_function_coverage=1 00:38:45.220 --rc genhtml_legend=1 00:38:45.220 --rc geninfo_all_blocks=1 00:38:45.220 --rc geninfo_unexecuted_blocks=1 00:38:45.220 00:38:45.220 ' 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:45.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.220 --rc genhtml_branch_coverage=1 00:38:45.220 --rc genhtml_function_coverage=1 00:38:45.220 --rc genhtml_legend=1 00:38:45.220 --rc geninfo_all_blocks=1 00:38:45.220 --rc geninfo_unexecuted_blocks=1 00:38:45.220 00:38:45.220 ' 00:38:45.220 08:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:45.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.220 --rc genhtml_branch_coverage=1 00:38:45.220 --rc genhtml_function_coverage=1 00:38:45.220 --rc genhtml_legend=1 00:38:45.220 --rc geninfo_all_blocks=1 00:38:45.220 --rc 
geninfo_unexecuted_blocks=1 00:38:45.220 00:38:45.220 ' 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:45.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.220 --rc genhtml_branch_coverage=1 00:38:45.220 --rc genhtml_function_coverage=1 00:38:45.220 --rc genhtml_legend=1 00:38:45.220 --rc geninfo_all_blocks=1 00:38:45.220 --rc geninfo_unexecuted_blocks=1 00:38:45.220 00:38:45.220 ' 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.220 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.220 
08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:45.221 08:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:45.221 08:02:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:47.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:38:47.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.123 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:47.124 08:02:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:47.124 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:47.124 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:47.124 08:02:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:47.124 08:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:47.124 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:47.124 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:47.124 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:47.124 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:47.382 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:47.382 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:47.382 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:38:47.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:47.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:38:47.383 00:38:47.383 --- 10.0.0.2 ping statistics --- 00:38:47.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.383 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:47.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:47.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:38:47.383 00:38:47.383 --- 10.0.0.1 ping statistics --- 00:38:47.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.383 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
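The two pings above gate the namespace link setup: the harness only proceeds once both directions report zero loss. A minimal sketch of extracting that loss figure from a ping summary line (the sample string is copied from the log above; parsing this way is an illustration, not the harness's actual check, which simply relies on ping's exit status):

```shell
# Sample summary line as printed by ping in the log above.
summary='1 packets transmitted, 1 received, 0% packet loss, time 0ms'

# Pull out the packet-loss percentage; anything non-zero would mean
# the cvl_0_0 <-> cvl_0_1 link inside the netns setup is broken.
loss=$(echo "$summary" | sed -n 's/.* \([0-9]*\)% packet loss.*/\1/p')
echo "loss=${loss}%"
if [ "$loss" -eq 0 ]; then
  echo "link OK"
fi
```

In the harness itself a failed `ping -c 1` makes the setup function return non-zero, so the loss percentage never needs to be parsed explicitly.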
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3151840 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3151840 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3151840 ']' 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.383 08:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:47.383 [2024-11-19 08:02:39.195877] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:47.383 [2024-11-19 08:02:39.198530] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:38:47.383 [2024-11-19 08:02:39.198628] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.642 [2024-11-19 08:02:39.355516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:47.642 [2024-11-19 08:02:39.496762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.642 [2024-11-19 08:02:39.496848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.642 [2024-11-19 08:02:39.496888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.642 [2024-11-19 08:02:39.496908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.642 [2024-11-19 08:02:39.496934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:47.642 [2024-11-19 08:02:39.502747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.642 [2024-11-19 08:02:39.502748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.213 [2024-11-19 08:02:39.874171] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
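The `waitforlisten 3151840` call traced above blocks until the freshly started `nvmf_tgt` is listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A self-contained sketch of that poll-until-listening shape, using a temp file as a hypothetical stand-in for the RPC socket (the real check probes the UNIX socket, not a plain file):

```shell
rpc_sock="$(mktemp -u)"             # hypothetical stand-in for /var/tmp/spdk.sock

# Background job standing in for nvmf_tgt creating its RPC socket on startup.
( sleep 0.2; touch "$rpc_sock" ) &

max_retries=100                     # mirrors max_retries=100 in the trace
until [ -e "$rpc_sock" ]; do
  max_retries=$((max_retries - 1))
  if [ "$max_retries" -le 0 ]; then
    echo "app never listened"
    exit 1
  fi
  sleep 0.1
done
echo "socket ready"
```

The same pattern explains the "Waiting for process to start up and listen on UNIX domain socket" message in the log: it is printed once before the retry loop begins.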
00:38:48.213 [2024-11-19 08:02:39.874963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:48.213 [2024-11-19 08:02:39.875333] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:48.472 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.472 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:48.472 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:48.472 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.472 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:48.472 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 [2024-11-19 08:02:40.195903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 [2024-11-19 08:02:40.216129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 NULL1 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 
1000000 -t 1000000 -w 1000000 -n 1000000 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 Delay0 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3151992 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:48.473 08:02:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:48.473 [2024-11-19 08:02:40.351679] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:38:50.376 08:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:50.376 08:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.376 08:02:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, 
sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 [2024-11-19 08:02:42.536554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed 
with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Write completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 Read completed with error (sct=0, sc=8) 00:38:50.635 starting I/O failed: -6 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 starting I/O failed: -6 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 starting I/O failed: -6 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 starting I/O failed: -6 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 starting I/O failed: -6 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed 
with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 starting I/O failed: -6 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 [2024-11-19 08:02:42.537807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 
00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read 
completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error 
(sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Write completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 Read completed with error (sct=0, sc=8) 00:38:50.636 [2024-11-19 08:02:42.538966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:38:51.574 [2024-11-19 08:02:43.502829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with 
error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 [2024-11-19 08:02:43.540787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Write completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 00:38:51.835 Read completed with error (sct=0, sc=8) 
00:38:51.835 [2024-11-19 08:02:43.541583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 [2024-11-19 08:02:43.542796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Read completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 Write completed with error (sct=0, sc=8)
00:38:51.835 [2024-11-19 08:02:43.543351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set
00:38:51.835 Initializing NVMe Controllers
00:38:51.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:51.835 Controller IO queue size 128, less than required.
00:38:51.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:51.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:38:51.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:38:51.835 Initialization complete. Launching workers.
00:38:51.835 ========================================================
00:38:51.835 Latency(us)
00:38:51.835 Device Information : IOPS MiB/s Average min max
00:38:51.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.16 0.09 884054.09 923.41 1016640.20
00:38:51.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.31 0.08 913955.02 2446.34 1018051.69
00:38:51.835 ========================================================
00:38:51.835 Total : 338.47 0.17 898392.55 923.41 1018051.69
00:38:51.835 
00:38:51.835 [2024-11-19 08:02:43.548172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor
00:38:51.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:38:51.835 08:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:51.835 08:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:38:51.835 08:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3151992
00:38:51.836 08:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3151992
00:38:52.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3151992) - No such process
00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3151992
00:38:52.403 08:02:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3151992 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3151992 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:52.403 [2024-11-19 08:02:44.068156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3152514 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:52.403 08:02:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3152514 00:38:52.403 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:52.403 [2024-11-19 08:02:44.177833] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:38:52.661 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:52.661 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3152514 00:38:52.661 08:02:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:53.228 08:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:53.228 08:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3152514 00:38:53.228 08:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:53.797 08:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:53.797 08:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3152514 00:38:53.797 08:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:54.365 08:02:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:54.365 08:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3152514 00:38:54.365 08:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:54.931 08:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:54.931 08:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3152514 00:38:54.931 08:02:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:55.189 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:55.189 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3152514 00:38:55.189 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:55.447 Initializing NVMe Controllers 00:38:55.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:55.447 Controller IO queue size 128, less than required. 00:38:55.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:55.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:55.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:55.447 Initialization complete. Launching workers. 
00:38:55.447 ========================================================
00:38:55.447 Latency(us)
00:38:55.447 Device Information : IOPS MiB/s Average min max
00:38:55.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005792.98 1000299.76 1041464.18
00:38:55.447 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005497.83 1000222.64 1013593.91
00:38:55.447 ========================================================
00:38:55.447 Total : 256.00 0.12 1005645.41 1000222.64 1041464.18
00:38:55.447 
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3152514
00:38:55.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3152514) - No such process
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3152514
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:55.705 rmmod nvme_tcp 00:38:55.705 rmmod nvme_fabrics 00:38:55.705 rmmod nvme_keyring 00:38:55.705 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3151840 ']' 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3151840 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3151840 ']' 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3151840 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3151840 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:55.965 08:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3151840' 00:38:55.965 killing process with pid 3151840 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3151840 00:38:55.965 08:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3151840 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.903 08:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.903 08:02:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:59.441 00:38:59.441 real 0m13.999s 00:38:59.441 user 0m26.270s 00:38:59.441 sys 0m4.027s 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:59.441 ************************************ 00:38:59.441 END TEST nvmf_delete_subsystem 00:38:59.441 ************************************ 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:59.441 ************************************ 00:38:59.441 START TEST nvmf_host_management 00:38:59.441 ************************************ 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:59.441 * Looking for test storage... 
00:38:59.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:38:59.441 08:02:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:59.441 08:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:59.441 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:59.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.441 --rc genhtml_branch_coverage=1 00:38:59.441 --rc genhtml_function_coverage=1 00:38:59.441 --rc genhtml_legend=1 00:38:59.441 --rc geninfo_all_blocks=1 00:38:59.441 --rc geninfo_unexecuted_blocks=1 00:38:59.442 00:38:59.442 ' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:59.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.442 --rc genhtml_branch_coverage=1 00:38:59.442 --rc genhtml_function_coverage=1 00:38:59.442 --rc genhtml_legend=1 00:38:59.442 --rc geninfo_all_blocks=1 00:38:59.442 --rc geninfo_unexecuted_blocks=1 00:38:59.442 00:38:59.442 ' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:59.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.442 --rc genhtml_branch_coverage=1 00:38:59.442 --rc genhtml_function_coverage=1 00:38:59.442 --rc genhtml_legend=1 00:38:59.442 --rc geninfo_all_blocks=1 00:38:59.442 --rc geninfo_unexecuted_blocks=1 00:38:59.442 00:38:59.442 ' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:59.442 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:59.442 --rc genhtml_branch_coverage=1 00:38:59.442 --rc genhtml_function_coverage=1 00:38:59.442 --rc genhtml_legend=1 00:38:59.442 --rc geninfo_all_blocks=1 00:38:59.442 --rc geninfo_unexecuted_blocks=1 00:38:59.442 00:38:59.442 ' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:59.442 08:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.442 
08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:59.442 08:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:01.345 
08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:01.345 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:01.346 08:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:01.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.346 08:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:01.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.346 08:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:01.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:01.346 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:01.346 08:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:01.346 08:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:01.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:01.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:39:01.346 00:39:01.346 --- 10.0.0.2 ping statistics --- 00:39:01.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.346 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:01.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:01.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:39:01.346 00:39:01.346 --- 10.0.0.1 ping statistics --- 00:39:01.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.346 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3154980 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3154980 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3154980 ']' 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:01.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:01.346 08:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:01.346 [2024-11-19 08:02:53.222027] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:01.346 [2024-11-19 08:02:53.224603] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:39:01.346 [2024-11-19 08:02:53.224724] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.606 [2024-11-19 08:02:53.368022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:01.606 [2024-11-19 08:02:53.496888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:01.606 [2024-11-19 08:02:53.496949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:01.606 [2024-11-19 08:02:53.496990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:01.606 [2024-11-19 08:02:53.497011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:01.606 [2024-11-19 08:02:53.497045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:01.606 [2024-11-19 08:02:53.499632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:01.606 [2024-11-19 08:02:53.499703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:01.606 [2024-11-19 08:02:53.499804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.606 [2024-11-19 08:02:53.499815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:02.173 [2024-11-19 08:02:53.824563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:02.173 [2024-11-19 08:02:53.836006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:02.173 [2024-11-19 08:02:53.836228] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:02.173 [2024-11-19 08:02:53.837043] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:02.173 [2024-11-19 08:02:53.837363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:02.432 [2024-11-19 08:02:54.184873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:02.432 08:02:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:02.432 Malloc0 00:39:02.432 [2024-11-19 08:02:54.301168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3155152 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3155152 /var/tmp/bdevperf.sock 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3155152 ']' 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:02.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:02.432 { 00:39:02.432 "params": { 00:39:02.432 "name": "Nvme$subsystem", 00:39:02.432 "trtype": "$TEST_TRANSPORT", 00:39:02.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:02.432 "adrfam": "ipv4", 00:39:02.432 "trsvcid": "$NVMF_PORT", 00:39:02.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:02.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:02.432 "hdgst": ${hdgst:-false}, 00:39:02.432 "ddgst": ${ddgst:-false} 00:39:02.432 }, 00:39:02.432 "method": "bdev_nvme_attach_controller" 00:39:02.432 } 00:39:02.432 EOF 00:39:02.432 )") 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:02.432 08:02:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:02.432 "params": { 00:39:02.432 "name": "Nvme0", 00:39:02.432 "trtype": "tcp", 00:39:02.432 "traddr": "10.0.0.2", 00:39:02.432 "adrfam": "ipv4", 00:39:02.432 "trsvcid": "4420", 00:39:02.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:02.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:02.432 "hdgst": false, 00:39:02.432 "ddgst": false 00:39:02.432 }, 00:39:02.432 "method": "bdev_nvme_attach_controller" 00:39:02.432 }' 00:39:02.692 [2024-11-19 08:02:54.419625] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:39:02.692 [2024-11-19 08:02:54.419819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155152 ] 00:39:02.692 [2024-11-19 08:02:54.556818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.952 [2024-11-19 08:02:54.684479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.521 Running I/O for 10 seconds... 
00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:03.521 08:02:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:03.521 
[2024-11-19 08:02:55.436876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:03.521 [2024-11-19 08:02:55.436947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:03.521 [2024-11-19 08:02:55.436979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:03.521 [2024-11-19 08:02:55.436998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:03.521 [2024-11-19 08:02:55.437017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:03.521 [2024-11-19 08:02:55.437021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:03.521 [2024-11-19 08:02:55.437091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.437119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:03.521 [2024-11-19 08:02:55.437142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.437173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:03.521 [2024-11-19 08:02:55.437194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.437216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:03.521 [2024-11-19 08:02:55.437236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.437255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.521 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:03.521 [2024-11-19 08:02:55.445009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.521 [2024-11-19 08:02:55.445523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.521 [2024-11-19 08:02:55.445547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.445592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.445637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.445695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.445743] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.445789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.445834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.445880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.445925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.445970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.445998] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 
[2024-11-19 08:02:55.446522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.446952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.446973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.447003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.447024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.447048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.447069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.447092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.447113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.447137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.447158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.447182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.447203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.447236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.447258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.447282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.447303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.447327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.522 [2024-11-19 08:02:55.447349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.522 [2024-11-19 08:02:55.447373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 
08:02:55.447575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.447964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.447995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.448016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.448050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:03.523 [2024-11-19 08:02:55.448071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:03.523 [2024-11-19 08:02:55.448412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:39:03.523 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:39:03.523 08:02:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:03.523 [2024-11-19 08:02:55.449623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:03.523 task offset: 32768 on job bdev=Nvme0n1 fails 00:39:03.523 00:39:03.523 Latency(us) 00:39:03.523 [2024-11-19T07:02:55.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.523 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:03.523 Job: Nvme0n1 ended in about 0.21 seconds with error 00:39:03.523 Verification LBA range: start 0x0 length 0x400 00:39:03.523 Nvme0n1 : 0.21 1208.70 75.54 302.17 0.00 40184.11 3835.07 42137.22 00:39:03.523 [2024-11-19T07:02:55.453Z] =================================================================================================================== 00:39:03.523 [2024-11-19T07:02:55.453Z] Total : 1208.70 75.54 302.17 0.00 40184.11 3835.07 42137.22 00:39:03.782 [2024-11-19 08:02:55.454484] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:03.782 [2024-11-19 08:02:55.460016] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3155152 00:39:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3155152) - No such process 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:04.718 { 00:39:04.718 "params": { 00:39:04.718 "name": "Nvme$subsystem", 00:39:04.718 "trtype": "$TEST_TRANSPORT", 00:39:04.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:04.718 "adrfam": "ipv4", 00:39:04.718 "trsvcid": "$NVMF_PORT", 00:39:04.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:04.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:04.718 "hdgst": ${hdgst:-false}, 00:39:04.718 "ddgst": ${ddgst:-false} 
00:39:04.718 }, 00:39:04.718 "method": "bdev_nvme_attach_controller" 00:39:04.718 } 00:39:04.718 EOF 00:39:04.718 )") 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:04.718 08:02:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:04.718 "params": { 00:39:04.718 "name": "Nvme0", 00:39:04.718 "trtype": "tcp", 00:39:04.718 "traddr": "10.0.0.2", 00:39:04.718 "adrfam": "ipv4", 00:39:04.718 "trsvcid": "4420", 00:39:04.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:04.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:04.718 "hdgst": false, 00:39:04.718 "ddgst": false 00:39:04.718 }, 00:39:04.718 "method": "bdev_nvme_attach_controller" 00:39:04.718 }' 00:39:04.718 [2024-11-19 08:02:56.538956] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:39:04.718 [2024-11-19 08:02:56.539118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155361 ] 00:39:04.978 [2024-11-19 08:02:56.675248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.978 [2024-11-19 08:02:56.805071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.548 Running I/O for 1 seconds... 
00:39:06.510 1344.00 IOPS, 84.00 MiB/s 00:39:06.510 Latency(us) 00:39:06.510 [2024-11-19T07:02:58.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.510 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:06.510 Verification LBA range: start 0x0 length 0x400 00:39:06.510 Nvme0n1 : 1.02 1375.62 85.98 0.00 0.00 45742.79 7475.96 40972.14 00:39:06.510 [2024-11-19T07:02:58.440Z] =================================================================================================================== 00:39:06.510 [2024-11-19T07:02:58.440Z] Total : 1375.62 85.98 0.00 0.00 45742.79 7475.96 40972.14 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:07.462 08:02:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:07.462 rmmod nvme_tcp 00:39:07.462 rmmod nvme_fabrics 00:39:07.462 rmmod nvme_keyring 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3154980 ']' 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3154980 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3154980 ']' 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3154980 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:39:07.462 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:07.463 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3154980 00:39:07.463 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:07.463 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:07.463 08:02:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3154980' 00:39:07.463 killing process with pid 3154980 00:39:07.463 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3154980 00:39:07.463 08:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3154980 00:39:08.838 [2024-11-19 08:03:00.503611] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:08.838 08:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.742 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:10.742 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:10.742 00:39:10.742 real 0m11.720s 00:39:10.742 user 0m25.514s 00:39:10.742 sys 0m4.469s 00:39:10.742 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:10.742 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:10.742 ************************************ 00:39:10.742 END TEST nvmf_host_management 00:39:10.742 ************************************ 00:39:10.742 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:10.742 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:10.742 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:10.742 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:10.742 ************************************ 00:39:10.742 START TEST nvmf_lvol 00:39:10.742 ************************************ 00:39:10.742 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:11.002 * Looking for test storage... 
00:39:11.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:11.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.002 --rc genhtml_branch_coverage=1 00:39:11.002 --rc genhtml_function_coverage=1 00:39:11.002 --rc genhtml_legend=1 00:39:11.002 --rc geninfo_all_blocks=1 00:39:11.002 --rc geninfo_unexecuted_blocks=1 00:39:11.002 00:39:11.002 ' 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:11.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.002 --rc genhtml_branch_coverage=1 00:39:11.002 --rc genhtml_function_coverage=1 00:39:11.002 --rc genhtml_legend=1 00:39:11.002 --rc geninfo_all_blocks=1 00:39:11.002 --rc geninfo_unexecuted_blocks=1 00:39:11.002 00:39:11.002 ' 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:11.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.002 --rc genhtml_branch_coverage=1 00:39:11.002 --rc genhtml_function_coverage=1 00:39:11.002 --rc genhtml_legend=1 00:39:11.002 --rc geninfo_all_blocks=1 00:39:11.002 --rc geninfo_unexecuted_blocks=1 00:39:11.002 00:39:11.002 ' 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:11.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.002 --rc genhtml_branch_coverage=1 00:39:11.002 --rc genhtml_function_coverage=1 00:39:11.002 --rc genhtml_legend=1 00:39:11.002 --rc geninfo_all_blocks=1 00:39:11.002 --rc geninfo_unexecuted_blocks=1 00:39:11.002 00:39:11.002 ' 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:11.002 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:11.003 
08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:11.003 08:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:13.540 08:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:13.540 08:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:13.540 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:13.540 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:13.540 08:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:13.540 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:13.540 08:03:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:13.540 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:13.540 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:13.541 08:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:13.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:13.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:39:13.541 00:39:13.541 --- 10.0.0.2 ping statistics --- 00:39:13.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:13.541 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:13.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:13.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:39:13.541 00:39:13.541 --- 10.0.0.1 ping statistics --- 00:39:13.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:13.541 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3157882 
00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3157882 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3157882 ']' 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:13.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:13.541 08:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:13.541 [2024-11-19 08:03:05.144063] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:13.541 [2024-11-19 08:03:05.146612] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:39:13.541 [2024-11-19 08:03:05.146730] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:13.541 [2024-11-19 08:03:05.295131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:13.541 [2024-11-19 08:03:05.424533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:13.541 [2024-11-19 08:03:05.424622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:13.541 [2024-11-19 08:03:05.424648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:13.541 [2024-11-19 08:03:05.424667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:13.541 [2024-11-19 08:03:05.424685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:13.541 [2024-11-19 08:03:05.427171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.541 [2024-11-19 08:03:05.427189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.541 [2024-11-19 08:03:05.427196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:14.111 [2024-11-19 08:03:05.761401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:14.111 [2024-11-19 08:03:05.762422] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:14.111 [2024-11-19 08:03:05.763224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:14.111 [2024-11-19 08:03:05.763534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:14.369 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:14.369 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:39:14.369 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:14.369 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:14.369 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:14.370 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:14.370 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:14.628 [2024-11-19 08:03:06.388249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:14.628 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:14.888 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:14.888 08:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:15.454 08:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:15.454 08:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:15.712 08:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:15.970 08:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=add579e9-9ac4-4bc1-b74c-5b74d8334b84 00:39:15.970 08:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u add579e9-9ac4-4bc1-b74c-5b74d8334b84 lvol 20 00:39:16.228 08:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9f153842-9a86-4f48-afe0-2e59658e0a9a 00:39:16.228 08:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:16.486 08:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9f153842-9a86-4f48-afe0-2e59658e0a9a 00:39:16.744 08:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:17.002 [2024-11-19 08:03:08.856399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:17.002 08:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:17.259 
08:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3158938 00:39:17.260 08:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:17.260 08:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:18.634 08:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9f153842-9a86-4f48-afe0-2e59658e0a9a MY_SNAPSHOT 00:39:18.634 08:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d394e361-4ffb-48d2-a0b5-d20b5647add7 00:39:18.634 08:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9f153842-9a86-4f48-afe0-2e59658e0a9a 30 00:39:18.892 08:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d394e361-4ffb-48d2-a0b5-d20b5647add7 MY_CLONE 00:39:19.458 08:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4277f1f0-1258-4e13-96f9-24a541bef169 00:39:19.458 08:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4277f1f0-1258-4e13-96f9-24a541bef169 00:39:20.027 08:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3158938 00:39:28.147 Initializing NVMe Controllers 00:39:28.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:28.147 
Controller IO queue size 128, less than required. 00:39:28.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:28.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:28.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:28.147 Initialization complete. Launching workers. 00:39:28.147 ======================================================== 00:39:28.147 Latency(us) 00:39:28.147 Device Information : IOPS MiB/s Average min max 00:39:28.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8206.10 32.06 15598.77 493.46 187064.46 00:39:28.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8033.20 31.38 15940.67 5592.75 151484.40 00:39:28.147 ======================================================== 00:39:28.147 Total : 16239.30 63.43 15767.90 493.46 187064.46 00:39:28.147 00:39:28.147 08:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:28.147 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9f153842-9a86-4f48-afe0-2e59658e0a9a 00:39:28.405 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u add579e9-9ac4-4bc1-b74c-5b74d8334b84 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:28.974 rmmod nvme_tcp 00:39:28.974 rmmod nvme_fabrics 00:39:28.974 rmmod nvme_keyring 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3157882 ']' 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3157882 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3157882 ']' 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3157882 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3157882 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3157882' 00:39:28.974 killing process with pid 3157882 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3157882 00:39:28.974 08:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3157882 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.358 08:03:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:30.358 08:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:32.265 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:32.265 00:39:32.265 real 0m21.467s 00:39:32.265 user 0m59.686s 00:39:32.265 sys 0m7.534s 00:39:32.265 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:32.265 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:32.265 ************************************ 00:39:32.265 END TEST nvmf_lvol 00:39:32.265 ************************************ 00:39:32.265 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:32.265 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:32.265 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:32.265 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:32.265 ************************************ 00:39:32.265 START TEST nvmf_lvs_grow 00:39:32.265 ************************************ 00:39:32.265 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:32.524 * Looking for test storage... 
00:39:32.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:32.524 08:03:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:32.524 08:03:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:32.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.524 --rc genhtml_branch_coverage=1 00:39:32.524 --rc genhtml_function_coverage=1 00:39:32.524 --rc genhtml_legend=1 00:39:32.524 --rc geninfo_all_blocks=1 00:39:32.524 --rc geninfo_unexecuted_blocks=1 00:39:32.524 00:39:32.524 ' 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:32.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.524 --rc genhtml_branch_coverage=1 00:39:32.524 --rc genhtml_function_coverage=1 00:39:32.524 --rc genhtml_legend=1 00:39:32.524 --rc geninfo_all_blocks=1 00:39:32.524 --rc geninfo_unexecuted_blocks=1 00:39:32.524 00:39:32.524 ' 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:32.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.524 --rc genhtml_branch_coverage=1 00:39:32.524 --rc genhtml_function_coverage=1 00:39:32.524 --rc genhtml_legend=1 00:39:32.524 --rc geninfo_all_blocks=1 00:39:32.524 --rc geninfo_unexecuted_blocks=1 00:39:32.524 00:39:32.524 ' 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:32.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.524 --rc genhtml_branch_coverage=1 00:39:32.524 --rc genhtml_function_coverage=1 00:39:32.524 --rc genhtml_legend=1 00:39:32.524 --rc geninfo_all_blocks=1 00:39:32.524 --rc 
geninfo_unexecuted_blocks=1 00:39:32.524 00:39:32.524 ' 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:32.524 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:32.525 08:03:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.525 08:03:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:32.525 08:03:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:32.525 08:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:34.431 
08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:34.431 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:34.431 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:34.431 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:34.431 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:34.431 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:34.432 08:03:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:34.432 08:03:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:34.432 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:34.432 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:34.432 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:34.432 08:03:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:34.432 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:34.432 
08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:34.432 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:34.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:34.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:39:34.433 00:39:34.433 --- 10.0.0.2 ping statistics --- 00:39:34.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.433 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:34.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:34.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:39:34.433 00:39:34.433 --- 10.0.0.1 ping statistics --- 00:39:34.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.433 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:34.433 08:03:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3162318 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:34.433 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3162318 00:39:34.692 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3162318 ']' 00:39:34.692 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:34.692 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:34.692 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:34.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:34.692 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:34.692 08:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:34.692 [2024-11-19 08:03:26.449174] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:34.692 [2024-11-19 08:03:26.451623] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:39:34.692 [2024-11-19 08:03:26.451741] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:34.692 [2024-11-19 08:03:26.591472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.952 [2024-11-19 08:03:26.709875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:34.952 [2024-11-19 08:03:26.709953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:34.952 [2024-11-19 08:03:26.709994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:34.952 [2024-11-19 08:03:26.710012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:34.952 [2024-11-19 08:03:26.710045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:34.952 [2024-11-19 08:03:26.711448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.211 [2024-11-19 08:03:27.034840] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:35.211 [2024-11-19 08:03:27.035259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:35.779 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:35.779 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:39:35.779 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:35.779 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:35.779 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:35.779 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:35.779 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:36.038 [2024-11-19 08:03:27.728555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:36.038 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:36.039 ************************************ 00:39:36.039 START TEST lvs_grow_clean 00:39:36.039 ************************************ 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:39:36.039 08:03:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:36.039 08:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:36.298 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:36.298 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:36.557 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:36.557 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:36.557 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:36.817 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:36.817 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:36.817 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 lvol 150 00:39:37.076 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e853532c-028d-4d30-9b1c-de7bf727575e 00:39:37.076 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:37.076 08:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:37.335 [2024-11-19 08:03:29.176334] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:37.335 [2024-11-19 08:03:29.176490] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:37.335 true 00:39:37.335 08:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:37.335 08:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:37.594 08:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:37.594 08:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:37.852 08:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e853532c-028d-4d30-9b1c-de7bf727575e 00:39:38.110 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:38.678 [2024-11-19 08:03:30.336799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.678 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3162764 00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3162764 /var/tmp/bdevperf.sock 00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3162764 ']' 00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:38.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:38.937 08:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:38.937 [2024-11-19 08:03:30.716180] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:39:38.938 [2024-11-19 08:03:30.716331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162764 ] 00:39:38.938 [2024-11-19 08:03:30.854383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.196 [2024-11-19 08:03:30.983488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:39.832 08:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:39.832 08:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:39:39.832 08:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:40.401 Nvme0n1 00:39:40.401 08:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:40.401 [ 00:39:40.401 { 00:39:40.401 "name": "Nvme0n1", 00:39:40.401 "aliases": [ 00:39:40.401 "e853532c-028d-4d30-9b1c-de7bf727575e" 00:39:40.401 ], 00:39:40.401 "product_name": "NVMe disk", 00:39:40.401 
"block_size": 4096, 00:39:40.401 "num_blocks": 38912, 00:39:40.401 "uuid": "e853532c-028d-4d30-9b1c-de7bf727575e", 00:39:40.401 "numa_id": 0, 00:39:40.401 "assigned_rate_limits": { 00:39:40.401 "rw_ios_per_sec": 0, 00:39:40.401 "rw_mbytes_per_sec": 0, 00:39:40.401 "r_mbytes_per_sec": 0, 00:39:40.401 "w_mbytes_per_sec": 0 00:39:40.401 }, 00:39:40.401 "claimed": false, 00:39:40.401 "zoned": false, 00:39:40.401 "supported_io_types": { 00:39:40.401 "read": true, 00:39:40.401 "write": true, 00:39:40.401 "unmap": true, 00:39:40.401 "flush": true, 00:39:40.401 "reset": true, 00:39:40.401 "nvme_admin": true, 00:39:40.401 "nvme_io": true, 00:39:40.401 "nvme_io_md": false, 00:39:40.401 "write_zeroes": true, 00:39:40.401 "zcopy": false, 00:39:40.401 "get_zone_info": false, 00:39:40.401 "zone_management": false, 00:39:40.401 "zone_append": false, 00:39:40.401 "compare": true, 00:39:40.401 "compare_and_write": true, 00:39:40.401 "abort": true, 00:39:40.401 "seek_hole": false, 00:39:40.401 "seek_data": false, 00:39:40.401 "copy": true, 00:39:40.401 "nvme_iov_md": false 00:39:40.401 }, 00:39:40.401 "memory_domains": [ 00:39:40.401 { 00:39:40.401 "dma_device_id": "system", 00:39:40.401 "dma_device_type": 1 00:39:40.401 } 00:39:40.401 ], 00:39:40.401 "driver_specific": { 00:39:40.401 "nvme": [ 00:39:40.401 { 00:39:40.401 "trid": { 00:39:40.401 "trtype": "TCP", 00:39:40.401 "adrfam": "IPv4", 00:39:40.401 "traddr": "10.0.0.2", 00:39:40.401 "trsvcid": "4420", 00:39:40.401 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:40.401 }, 00:39:40.401 "ctrlr_data": { 00:39:40.401 "cntlid": 1, 00:39:40.401 "vendor_id": "0x8086", 00:39:40.401 "model_number": "SPDK bdev Controller", 00:39:40.401 "serial_number": "SPDK0", 00:39:40.401 "firmware_revision": "25.01", 00:39:40.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:40.401 "oacs": { 00:39:40.401 "security": 0, 00:39:40.401 "format": 0, 00:39:40.401 "firmware": 0, 00:39:40.401 "ns_manage": 0 00:39:40.401 }, 00:39:40.401 "multi_ctrlr": true, 
00:39:40.401 "ana_reporting": false 00:39:40.401 }, 00:39:40.401 "vs": { 00:39:40.401 "nvme_version": "1.3" 00:39:40.401 }, 00:39:40.401 "ns_data": { 00:39:40.401 "id": 1, 00:39:40.401 "can_share": true 00:39:40.401 } 00:39:40.401 } 00:39:40.401 ], 00:39:40.401 "mp_policy": "active_passive" 00:39:40.401 } 00:39:40.401 } 00:39:40.401 ] 00:39:40.661 08:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3163025 00:39:40.661 08:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:40.661 08:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:40.661 Running I/O for 10 seconds... 00:39:41.598 Latency(us) 00:39:41.598 [2024-11-19T07:03:33.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:41.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:41.598 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:41.598 [2024-11-19T07:03:33.528Z] =================================================================================================================== 00:39:41.598 [2024-11-19T07:03:33.528Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:39:41.598 00:39:42.531 08:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:42.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:42.531 Nvme0n1 : 2.00 10477.50 40.93 0.00 0.00 0.00 0.00 0.00 00:39:42.531 [2024-11-19T07:03:34.461Z] 
=================================================================================================================== 00:39:42.531 [2024-11-19T07:03:34.461Z] Total : 10477.50 40.93 0.00 0.00 0.00 0.00 0.00 00:39:42.531 00:39:42.789 true 00:39:42.789 08:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:42.789 08:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:43.049 08:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:43.049 08:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:43.049 08:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3163025 00:39:43.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:43.615 Nvme0n1 : 3.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:43.615 [2024-11-19T07:03:35.546Z] =================================================================================================================== 00:39:43.616 [2024-11-19T07:03:35.546Z] Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:39:43.616 00:39:44.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:44.551 Nvme0n1 : 4.00 10604.50 41.42 0.00 0.00 0.00 0.00 0.00 00:39:44.551 [2024-11-19T07:03:36.481Z] =================================================================================================================== 00:39:44.551 [2024-11-19T07:03:36.481Z] Total : 10604.50 41.42 0.00 0.00 0.00 0.00 0.00 00:39:44.551 00:39:45.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:39:45.927 Nvme0n1 : 5.00 10642.60 41.57 0.00 0.00 0.00 0.00 0.00 00:39:45.927 [2024-11-19T07:03:37.857Z] =================================================================================================================== 00:39:45.927 [2024-11-19T07:03:37.857Z] Total : 10642.60 41.57 0.00 0.00 0.00 0.00 0.00 00:39:45.927 00:39:46.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:46.862 Nvme0n1 : 6.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:46.862 [2024-11-19T07:03:38.792Z] =================================================================================================================== 00:39:46.862 [2024-11-19T07:03:38.792Z] Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:39:46.862 00:39:47.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:47.796 Nvme0n1 : 7.00 10686.14 41.74 0.00 0.00 0.00 0.00 0.00 00:39:47.796 [2024-11-19T07:03:39.726Z] =================================================================================================================== 00:39:47.796 [2024-11-19T07:03:39.726Z] Total : 10686.14 41.74 0.00 0.00 0.00 0.00 0.00 00:39:47.796 00:39:48.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:48.729 Nvme0n1 : 8.00 10699.75 41.80 0.00 0.00 0.00 0.00 0.00 00:39:48.729 [2024-11-19T07:03:40.659Z] =================================================================================================================== 00:39:48.729 [2024-11-19T07:03:40.659Z] Total : 10699.75 41.80 0.00 0.00 0.00 0.00 0.00 00:39:48.729 00:39:49.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.664 Nvme0n1 : 9.00 10717.44 41.87 0.00 0.00 0.00 0.00 0.00 00:39:49.664 [2024-11-19T07:03:41.594Z] =================================================================================================================== 00:39:49.664 [2024-11-19T07:03:41.594Z] Total : 10717.44 41.87 0.00 0.00 0.00 0.00 0.00 00:39:49.664 
00:39:50.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.600 Nvme0n1 : 10.00 10744.20 41.97 0.00 0.00 0.00 0.00 0.00 00:39:50.600 [2024-11-19T07:03:42.530Z] =================================================================================================================== 00:39:50.600 [2024-11-19T07:03:42.530Z] Total : 10744.20 41.97 0.00 0.00 0.00 0.00 0.00 00:39:50.600 00:39:50.600 00:39:50.600 Latency(us) 00:39:50.600 [2024-11-19T07:03:42.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.600 Nvme0n1 : 10.01 10745.73 41.98 0.00 0.00 11904.77 9466.31 26602.76 00:39:50.600 [2024-11-19T07:03:42.530Z] =================================================================================================================== 00:39:50.600 [2024-11-19T07:03:42.530Z] Total : 10745.73 41.98 0.00 0.00 11904.77 9466.31 26602.76 00:39:50.600 { 00:39:50.600 "results": [ 00:39:50.600 { 00:39:50.600 "job": "Nvme0n1", 00:39:50.600 "core_mask": "0x2", 00:39:50.600 "workload": "randwrite", 00:39:50.600 "status": "finished", 00:39:50.600 "queue_depth": 128, 00:39:50.600 "io_size": 4096, 00:39:50.600 "runtime": 10.01049, 00:39:50.600 "iops": 10745.727731609542, 00:39:50.600 "mibps": 41.975498951599775, 00:39:50.600 "io_failed": 0, 00:39:50.600 "io_timeout": 0, 00:39:50.600 "avg_latency_us": 11904.771144040573, 00:39:50.600 "min_latency_us": 9466.31111111111, 00:39:50.600 "max_latency_us": 26602.76148148148 00:39:50.600 } 00:39:50.600 ], 00:39:50.600 "core_count": 1 00:39:50.600 } 00:39:50.600 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3162764 00:39:50.600 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3162764 ']' 00:39:50.600 08:03:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3162764 00:39:50.600 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:39:50.600 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:50.600 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3162764 00:39:50.859 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:50.859 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:50.859 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3162764' 00:39:50.859 killing process with pid 3162764 00:39:50.859 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3162764 00:39:50.859 Received shutdown signal, test time was about 10.000000 seconds 00:39:50.859 00:39:50.859 Latency(us) 00:39:50.859 [2024-11-19T07:03:42.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.859 [2024-11-19T07:03:42.789Z] =================================================================================================================== 00:39:50.859 [2024-11-19T07:03:42.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:50.859 08:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3162764 00:39:51.794 08:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:51.794 08:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:52.360 08:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:52.360 08:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:52.360 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:52.360 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:52.360 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:52.618 [2024-11-19 08:03:44.520452] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:52.618 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:52.618 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:52.618 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:52.618 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:52.876 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:52.876 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:52.876 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:52.876 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:52.876 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:52.876 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:52.876 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:52.877 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:53.135 request: 00:39:53.135 { 00:39:53.135 "uuid": "e7727a55-7eb7-45ff-9f7f-8f369cf797f6", 00:39:53.135 "method": 
"bdev_lvol_get_lvstores", 00:39:53.135 "req_id": 1 00:39:53.135 } 00:39:53.135 Got JSON-RPC error response 00:39:53.135 response: 00:39:53.135 { 00:39:53.135 "code": -19, 00:39:53.135 "message": "No such device" 00:39:53.135 } 00:39:53.135 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:53.135 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:53.135 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:53.135 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:53.135 08:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:53.394 aio_bdev 00:39:53.394 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e853532c-028d-4d30-9b1c-de7bf727575e 00:39:53.394 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e853532c-028d-4d30-9b1c-de7bf727575e 00:39:53.394 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:53.394 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:53.394 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:53.394 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:53.394 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:53.652 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e853532c-028d-4d30-9b1c-de7bf727575e -t 2000 00:39:53.914 [ 00:39:53.914 { 00:39:53.914 "name": "e853532c-028d-4d30-9b1c-de7bf727575e", 00:39:53.914 "aliases": [ 00:39:53.914 "lvs/lvol" 00:39:53.914 ], 00:39:53.914 "product_name": "Logical Volume", 00:39:53.914 "block_size": 4096, 00:39:53.914 "num_blocks": 38912, 00:39:53.914 "uuid": "e853532c-028d-4d30-9b1c-de7bf727575e", 00:39:53.914 "assigned_rate_limits": { 00:39:53.914 "rw_ios_per_sec": 0, 00:39:53.914 "rw_mbytes_per_sec": 0, 00:39:53.914 "r_mbytes_per_sec": 0, 00:39:53.914 "w_mbytes_per_sec": 0 00:39:53.914 }, 00:39:53.914 "claimed": false, 00:39:53.914 "zoned": false, 00:39:53.914 "supported_io_types": { 00:39:53.914 "read": true, 00:39:53.914 "write": true, 00:39:53.914 "unmap": true, 00:39:53.914 "flush": false, 00:39:53.914 "reset": true, 00:39:53.914 "nvme_admin": false, 00:39:53.914 "nvme_io": false, 00:39:53.914 "nvme_io_md": false, 00:39:53.914 "write_zeroes": true, 00:39:53.914 "zcopy": false, 00:39:53.914 "get_zone_info": false, 00:39:53.914 "zone_management": false, 00:39:53.914 "zone_append": false, 00:39:53.914 "compare": false, 00:39:53.914 "compare_and_write": false, 00:39:53.914 "abort": false, 00:39:53.914 "seek_hole": true, 00:39:53.914 "seek_data": true, 00:39:53.914 "copy": false, 00:39:53.914 "nvme_iov_md": false 00:39:53.914 }, 00:39:53.914 "driver_specific": { 00:39:53.914 "lvol": { 00:39:53.914 "lvol_store_uuid": "e7727a55-7eb7-45ff-9f7f-8f369cf797f6", 00:39:53.914 "base_bdev": "aio_bdev", 00:39:53.914 
"thin_provision": false, 00:39:53.914 "num_allocated_clusters": 38, 00:39:53.914 "snapshot": false, 00:39:53.914 "clone": false, 00:39:53.914 "esnap_clone": false 00:39:53.914 } 00:39:53.914 } 00:39:53.914 } 00:39:53.914 ] 00:39:53.914 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:53.914 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:53.914 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:54.173 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:54.173 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 00:39:54.173 08:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:54.431 08:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:54.431 08:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e853532c-028d-4d30-9b1c-de7bf727575e 00:39:54.690 08:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7727a55-7eb7-45ff-9f7f-8f369cf797f6 
00:39:54.948 08:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:55.206 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:55.206 00:39:55.206 real 0m19.311s 00:39:55.206 user 0m19.047s 00:39:55.206 sys 0m1.917s 00:39:55.206 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:55.206 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:55.206 ************************************ 00:39:55.206 END TEST lvs_grow_clean 00:39:55.206 ************************************ 00:39:55.206 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:55.206 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:55.206 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:55.206 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:55.206 ************************************ 00:39:55.206 START TEST lvs_grow_dirty 00:39:55.206 ************************************ 00:39:55.206 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:39:55.206 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:55.206 08:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:55.207 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:55.207 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:55.207 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:55.207 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:55.207 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:55.207 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:55.207 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:55.774 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:55.774 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:56.032 08:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:39:56.032 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:39:56.032 08:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:56.291 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:56.291 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:56.291 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 lvol 150 00:39:56.549 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=677c6e3a-0c2f-4af4-bdce-632b49394ac4 00:39:56.549 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:56.549 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:56.808 [2024-11-19 08:03:48.596275] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:56.808 [2024-11-19 
08:03:48.596416] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:56.808 true 00:39:56.808 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:39:56.808 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:57.066 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:57.066 08:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:57.332 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 677c6e3a-0c2f-4af4-bdce-632b49394ac4 00:39:57.602 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:57.859 [2024-11-19 08:03:49.684765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.859 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3165051 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3165051 /var/tmp/bdevperf.sock 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3165051 ']' 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:58.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:58.117 08:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:58.375 [2024-11-19 08:03:50.052396] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:39:58.375 [2024-11-19 08:03:50.052550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3165051 ] 00:39:58.375 [2024-11-19 08:03:50.196465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.634 [2024-11-19 08:03:50.328929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.201 08:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:59.201 08:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:59.201 08:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:59.767 Nvme0n1 00:39:59.767 08:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:00.026 [ 00:40:00.026 { 00:40:00.026 "name": "Nvme0n1", 00:40:00.026 "aliases": [ 00:40:00.026 "677c6e3a-0c2f-4af4-bdce-632b49394ac4" 00:40:00.026 ], 00:40:00.026 "product_name": "NVMe disk", 00:40:00.026 "block_size": 4096, 00:40:00.026 "num_blocks": 38912, 00:40:00.026 "uuid": "677c6e3a-0c2f-4af4-bdce-632b49394ac4", 00:40:00.026 "numa_id": 0, 00:40:00.026 "assigned_rate_limits": { 00:40:00.026 "rw_ios_per_sec": 0, 00:40:00.026 "rw_mbytes_per_sec": 0, 00:40:00.026 "r_mbytes_per_sec": 0, 00:40:00.026 "w_mbytes_per_sec": 0 00:40:00.026 }, 00:40:00.026 "claimed": false, 00:40:00.026 "zoned": false, 
00:40:00.026 "supported_io_types": { 00:40:00.026 "read": true, 00:40:00.026 "write": true, 00:40:00.026 "unmap": true, 00:40:00.026 "flush": true, 00:40:00.026 "reset": true, 00:40:00.026 "nvme_admin": true, 00:40:00.026 "nvme_io": true, 00:40:00.027 "nvme_io_md": false, 00:40:00.027 "write_zeroes": true, 00:40:00.027 "zcopy": false, 00:40:00.027 "get_zone_info": false, 00:40:00.027 "zone_management": false, 00:40:00.027 "zone_append": false, 00:40:00.027 "compare": true, 00:40:00.027 "compare_and_write": true, 00:40:00.027 "abort": true, 00:40:00.027 "seek_hole": false, 00:40:00.027 "seek_data": false, 00:40:00.027 "copy": true, 00:40:00.027 "nvme_iov_md": false 00:40:00.027 }, 00:40:00.027 "memory_domains": [ 00:40:00.027 { 00:40:00.027 "dma_device_id": "system", 00:40:00.027 "dma_device_type": 1 00:40:00.027 } 00:40:00.027 ], 00:40:00.027 "driver_specific": { 00:40:00.027 "nvme": [ 00:40:00.027 { 00:40:00.027 "trid": { 00:40:00.027 "trtype": "TCP", 00:40:00.027 "adrfam": "IPv4", 00:40:00.027 "traddr": "10.0.0.2", 00:40:00.027 "trsvcid": "4420", 00:40:00.027 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:00.027 }, 00:40:00.027 "ctrlr_data": { 00:40:00.027 "cntlid": 1, 00:40:00.027 "vendor_id": "0x8086", 00:40:00.027 "model_number": "SPDK bdev Controller", 00:40:00.027 "serial_number": "SPDK0", 00:40:00.027 "firmware_revision": "25.01", 00:40:00.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:00.027 "oacs": { 00:40:00.027 "security": 0, 00:40:00.027 "format": 0, 00:40:00.027 "firmware": 0, 00:40:00.027 "ns_manage": 0 00:40:00.027 }, 00:40:00.027 "multi_ctrlr": true, 00:40:00.027 "ana_reporting": false 00:40:00.027 }, 00:40:00.027 "vs": { 00:40:00.027 "nvme_version": "1.3" 00:40:00.027 }, 00:40:00.027 "ns_data": { 00:40:00.027 "id": 1, 00:40:00.027 "can_share": true 00:40:00.027 } 00:40:00.027 } 00:40:00.027 ], 00:40:00.027 "mp_policy": "active_passive" 00:40:00.027 } 00:40:00.027 } 00:40:00.027 ] 00:40:00.027 08:03:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3165317 00:40:00.027 08:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:00.027 08:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:00.285 Running I/O for 10 seconds... 00:40:01.221 Latency(us) 00:40:01.221 [2024-11-19T07:03:53.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:01.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:01.221 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:40:01.221 [2024-11-19T07:03:53.151Z] =================================================================================================================== 00:40:01.221 [2024-11-19T07:03:53.151Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:40:01.221 00:40:02.157 08:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:02.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:02.157 Nvme0n1 : 2.00 10477.50 40.93 0.00 0.00 0.00 0.00 0.00 00:40:02.157 [2024-11-19T07:03:54.087Z] =================================================================================================================== 00:40:02.157 [2024-11-19T07:03:54.087Z] Total : 10477.50 40.93 0.00 0.00 0.00 0.00 0.00 00:40:02.157 00:40:02.415 true 00:40:02.415 08:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:02.415 08:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:02.674 08:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:02.674 08:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:02.674 08:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3165317 00:40:03.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:03.241 Nvme0n1 : 3.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:40:03.241 [2024-11-19T07:03:55.171Z] =================================================================================================================== 00:40:03.241 [2024-11-19T07:03:55.171Z] Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:40:03.241 00:40:04.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:04.176 Nvme0n1 : 4.00 10604.50 41.42 0.00 0.00 0.00 0.00 0.00 00:40:04.176 [2024-11-19T07:03:56.106Z] =================================================================================================================== 00:40:04.176 [2024-11-19T07:03:56.106Z] Total : 10604.50 41.42 0.00 0.00 0.00 0.00 0.00 00:40:04.176 00:40:05.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:05.151 Nvme0n1 : 5.00 10642.60 41.57 0.00 0.00 0.00 0.00 0.00 00:40:05.151 [2024-11-19T07:03:57.081Z] =================================================================================================================== 00:40:05.151 [2024-11-19T07:03:57.081Z] Total : 10642.60 41.57 0.00 0.00 0.00 0.00 0.00 00:40:05.151 00:40:06.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:40:06.117 Nvme0n1 : 6.00 10689.17 41.75 0.00 0.00 0.00 0.00 0.00 00:40:06.117 [2024-11-19T07:03:58.047Z] =================================================================================================================== 00:40:06.117 [2024-11-19T07:03:58.047Z] Total : 10689.17 41.75 0.00 0.00 0.00 0.00 0.00 00:40:06.117 00:40:07.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:07.492 Nvme0n1 : 7.00 10704.29 41.81 0.00 0.00 0.00 0.00 0.00 00:40:07.492 [2024-11-19T07:03:59.423Z] =================================================================================================================== 00:40:07.493 [2024-11-19T07:03:59.423Z] Total : 10704.29 41.81 0.00 0.00 0.00 0.00 0.00 00:40:07.493 00:40:08.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:08.427 Nvme0n1 : 8.00 10731.50 41.92 0.00 0.00 0.00 0.00 0.00 00:40:08.427 [2024-11-19T07:04:00.357Z] =================================================================================================================== 00:40:08.427 [2024-11-19T07:04:00.357Z] Total : 10731.50 41.92 0.00 0.00 0.00 0.00 0.00 00:40:08.427 00:40:09.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:09.362 Nvme0n1 : 9.00 10731.56 41.92 0.00 0.00 0.00 0.00 0.00 00:40:09.362 [2024-11-19T07:04:01.292Z] =================================================================================================================== 00:40:09.362 [2024-11-19T07:04:01.292Z] Total : 10731.56 41.92 0.00 0.00 0.00 0.00 0.00 00:40:09.362 00:40:10.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:10.297 Nvme0n1 : 10.00 10750.60 41.99 0.00 0.00 0.00 0.00 0.00 00:40:10.297 [2024-11-19T07:04:02.227Z] =================================================================================================================== 00:40:10.297 [2024-11-19T07:04:02.227Z] Total : 10750.60 41.99 0.00 0.00 0.00 0.00 0.00 00:40:10.297 00:40:10.297 
00:40:10.297 Latency(us) 00:40:10.297 [2024-11-19T07:04:02.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:10.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:10.297 Nvme0n1 : 10.00 10752.35 42.00 0.00 0.00 11896.58 9514.86 25631.86 00:40:10.297 [2024-11-19T07:04:02.227Z] =================================================================================================================== 00:40:10.297 [2024-11-19T07:04:02.227Z] Total : 10752.35 42.00 0.00 0.00 11896.58 9514.86 25631.86 00:40:10.297 { 00:40:10.297 "results": [ 00:40:10.297 { 00:40:10.297 "job": "Nvme0n1", 00:40:10.297 "core_mask": "0x2", 00:40:10.297 "workload": "randwrite", 00:40:10.297 "status": "finished", 00:40:10.297 "queue_depth": 128, 00:40:10.297 "io_size": 4096, 00:40:10.297 "runtime": 10.004324, 00:40:10.297 "iops": 10752.350683564428, 00:40:10.297 "mibps": 42.001369857673545, 00:40:10.297 "io_failed": 0, 00:40:10.297 "io_timeout": 0, 00:40:10.297 "avg_latency_us": 11896.57956712425, 00:40:10.297 "min_latency_us": 9514.856296296297, 00:40:10.297 "max_latency_us": 25631.85777777778 00:40:10.297 } 00:40:10.297 ], 00:40:10.297 "core_count": 1 00:40:10.297 } 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3165051 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3165051 ']' 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3165051 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:10.297 08:04:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3165051 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3165051' 00:40:10.297 killing process with pid 3165051 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3165051 00:40:10.297 Received shutdown signal, test time was about 10.000000 seconds 00:40:10.297 00:40:10.297 Latency(us) 00:40:10.297 [2024-11-19T07:04:02.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:10.297 [2024-11-19T07:04:02.227Z] =================================================================================================================== 00:40:10.297 [2024-11-19T07:04:02.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:10.297 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3165051 00:40:11.232 08:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:11.491 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:11.749 08:04:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:11.749 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3162318 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3162318 00:40:12.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3162318 Killed "${NVMF_APP[@]}" "$@" 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3166773 00:40:12.008 08:04:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3166773 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3166773 ']' 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:12.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:12.008 08:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:12.266 [2024-11-19 08:04:03.954430] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:12.266 [2024-11-19 08:04:03.957506] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:12.266 [2024-11-19 08:04:03.957617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:12.266 [2024-11-19 08:04:04.132794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:12.525 [2024-11-19 08:04:04.270199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:12.525 [2024-11-19 08:04:04.270283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:12.525 [2024-11-19 08:04:04.270313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:12.525 [2024-11-19 08:04:04.270334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:12.525 [2024-11-19 08:04:04.270356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:12.525 [2024-11-19 08:04:04.272007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.783 [2024-11-19 08:04:04.645337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:12.783 [2024-11-19 08:04:04.645835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:13.042 08:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:13.042 08:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:13.042 08:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:13.042 08:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:13.042 08:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:13.301 08:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:13.301 08:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:13.301 [2024-11-19 08:04:05.228464] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:13.301 [2024-11-19 08:04:05.228747] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:13.301 [2024-11-19 08:04:05.228839] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:13.559 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:13.559 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 677c6e3a-0c2f-4af4-bdce-632b49394ac4 00:40:13.559 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=677c6e3a-0c2f-4af4-bdce-632b49394ac4 00:40:13.559 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:13.559 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:13.559 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:13.559 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:13.559 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:13.819 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 677c6e3a-0c2f-4af4-bdce-632b49394ac4 -t 2000 00:40:14.077 [ 00:40:14.077 { 00:40:14.077 "name": "677c6e3a-0c2f-4af4-bdce-632b49394ac4", 00:40:14.077 "aliases": [ 00:40:14.077 "lvs/lvol" 00:40:14.077 ], 00:40:14.077 "product_name": "Logical Volume", 00:40:14.077 "block_size": 4096, 00:40:14.077 "num_blocks": 38912, 00:40:14.077 "uuid": "677c6e3a-0c2f-4af4-bdce-632b49394ac4", 00:40:14.077 "assigned_rate_limits": { 00:40:14.077 "rw_ios_per_sec": 0, 00:40:14.077 "rw_mbytes_per_sec": 0, 00:40:14.077 "r_mbytes_per_sec": 0, 00:40:14.077 "w_mbytes_per_sec": 0 00:40:14.077 }, 00:40:14.077 "claimed": false, 00:40:14.077 "zoned": false, 00:40:14.077 "supported_io_types": { 00:40:14.077 "read": true, 00:40:14.077 "write": true, 00:40:14.077 "unmap": true, 00:40:14.077 "flush": false, 00:40:14.077 "reset": true, 00:40:14.077 "nvme_admin": false, 00:40:14.077 "nvme_io": false, 00:40:14.077 "nvme_io_md": false, 00:40:14.077 "write_zeroes": true, 
00:40:14.077 "zcopy": false, 00:40:14.077 "get_zone_info": false, 00:40:14.077 "zone_management": false, 00:40:14.077 "zone_append": false, 00:40:14.077 "compare": false, 00:40:14.077 "compare_and_write": false, 00:40:14.077 "abort": false, 00:40:14.077 "seek_hole": true, 00:40:14.077 "seek_data": true, 00:40:14.077 "copy": false, 00:40:14.077 "nvme_iov_md": false 00:40:14.078 }, 00:40:14.078 "driver_specific": { 00:40:14.078 "lvol": { 00:40:14.078 "lvol_store_uuid": "cfa50da8-0f12-4bdc-91e5-d3078fe723d1", 00:40:14.078 "base_bdev": "aio_bdev", 00:40:14.078 "thin_provision": false, 00:40:14.078 "num_allocated_clusters": 38, 00:40:14.078 "snapshot": false, 00:40:14.078 "clone": false, 00:40:14.078 "esnap_clone": false 00:40:14.078 } 00:40:14.078 } 00:40:14.078 } 00:40:14.078 ] 00:40:14.078 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:14.078 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:14.078 08:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:14.336 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:14.336 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:14.336 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:14.595 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:14.595 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:14.853 [2024-11-19 08:04:06.637091] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:14.853 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:15.112 request: 00:40:15.112 { 00:40:15.112 "uuid": "cfa50da8-0f12-4bdc-91e5-d3078fe723d1", 00:40:15.112 "method": "bdev_lvol_get_lvstores", 00:40:15.112 "req_id": 1 00:40:15.112 } 00:40:15.112 Got JSON-RPC error response 00:40:15.112 response: 00:40:15.112 { 00:40:15.112 "code": -19, 00:40:15.112 "message": "No such device" 00:40:15.112 } 00:40:15.112 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:40:15.112 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:15.112 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:15.112 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:15.112 08:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:15.370 aio_bdev 00:40:15.370 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 677c6e3a-0c2f-4af4-bdce-632b49394ac4 00:40:15.370 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=677c6e3a-0c2f-4af4-bdce-632b49394ac4 00:40:15.370 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:15.370 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:15.370 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:15.370 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:15.370 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:15.628 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 677c6e3a-0c2f-4af4-bdce-632b49394ac4 -t 2000 00:40:15.886 [ 00:40:15.886 { 00:40:15.886 "name": "677c6e3a-0c2f-4af4-bdce-632b49394ac4", 00:40:15.886 "aliases": [ 00:40:15.886 "lvs/lvol" 00:40:15.886 ], 00:40:15.886 "product_name": "Logical Volume", 00:40:15.886 "block_size": 4096, 00:40:15.886 "num_blocks": 38912, 00:40:15.886 "uuid": "677c6e3a-0c2f-4af4-bdce-632b49394ac4", 00:40:15.886 "assigned_rate_limits": { 00:40:15.886 "rw_ios_per_sec": 0, 00:40:15.886 "rw_mbytes_per_sec": 0, 00:40:15.886 
"r_mbytes_per_sec": 0, 00:40:15.886 "w_mbytes_per_sec": 0 00:40:15.886 }, 00:40:15.886 "claimed": false, 00:40:15.886 "zoned": false, 00:40:15.886 "supported_io_types": { 00:40:15.886 "read": true, 00:40:15.886 "write": true, 00:40:15.886 "unmap": true, 00:40:15.886 "flush": false, 00:40:15.886 "reset": true, 00:40:15.886 "nvme_admin": false, 00:40:15.886 "nvme_io": false, 00:40:15.886 "nvme_io_md": false, 00:40:15.886 "write_zeroes": true, 00:40:15.886 "zcopy": false, 00:40:15.886 "get_zone_info": false, 00:40:15.886 "zone_management": false, 00:40:15.886 "zone_append": false, 00:40:15.886 "compare": false, 00:40:15.886 "compare_and_write": false, 00:40:15.886 "abort": false, 00:40:15.886 "seek_hole": true, 00:40:15.886 "seek_data": true, 00:40:15.886 "copy": false, 00:40:15.886 "nvme_iov_md": false 00:40:15.886 }, 00:40:15.886 "driver_specific": { 00:40:15.886 "lvol": { 00:40:15.886 "lvol_store_uuid": "cfa50da8-0f12-4bdc-91e5-d3078fe723d1", 00:40:15.886 "base_bdev": "aio_bdev", 00:40:15.886 "thin_provision": false, 00:40:15.886 "num_allocated_clusters": 38, 00:40:15.886 "snapshot": false, 00:40:15.886 "clone": false, 00:40:15.886 "esnap_clone": false 00:40:15.886 } 00:40:15.886 } 00:40:15.886 } 00:40:15.886 ] 00:40:15.886 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:15.887 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:15.887 08:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:16.145 08:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:16.145 08:04:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:16.145 08:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:16.403 08:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:16.403 08:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 677c6e3a-0c2f-4af4-bdce-632b49394ac4 00:40:16.661 08:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cfa50da8-0f12-4bdc-91e5-d3078fe723d1 00:40:17.229 08:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:17.229 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:17.229 00:40:17.229 real 0m22.008s 00:40:17.229 user 0m39.166s 00:40:17.229 sys 0m4.751s 00:40:17.229 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:17.229 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:17.229 ************************************ 00:40:17.229 END TEST lvs_grow_dirty 00:40:17.229 ************************************ 
00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:17.489 nvmf_trace.0 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:17.489 08:04:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:17.489 rmmod nvme_tcp 00:40:17.489 rmmod nvme_fabrics 00:40:17.489 rmmod nvme_keyring 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3166773 ']' 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3166773 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3166773 ']' 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3166773 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3166773 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:17.489 
08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3166773' 00:40:17.489 killing process with pid 3166773 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3166773 00:40:17.489 08:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3166773 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:18.864 08:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.780 
08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:20.780 00:40:20.780 real 0m48.314s 00:40:20.780 user 1m1.494s 00:40:20.780 sys 0m8.700s 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:20.780 ************************************ 00:40:20.780 END TEST nvmf_lvs_grow 00:40:20.780 ************************************ 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:20.780 ************************************ 00:40:20.780 START TEST nvmf_bdev_io_wait 00:40:20.780 ************************************ 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:20.780 * Looking for test storage... 
00:40:20.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:20.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.780 --rc genhtml_branch_coverage=1 00:40:20.780 --rc genhtml_function_coverage=1 00:40:20.780 --rc genhtml_legend=1 00:40:20.780 --rc geninfo_all_blocks=1 00:40:20.780 --rc geninfo_unexecuted_blocks=1 00:40:20.780 00:40:20.780 ' 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:20.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.780 --rc genhtml_branch_coverage=1 00:40:20.780 --rc genhtml_function_coverage=1 00:40:20.780 --rc genhtml_legend=1 00:40:20.780 --rc geninfo_all_blocks=1 00:40:20.780 --rc geninfo_unexecuted_blocks=1 00:40:20.780 00:40:20.780 ' 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:20.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.780 --rc genhtml_branch_coverage=1 00:40:20.780 --rc genhtml_function_coverage=1 00:40:20.780 --rc genhtml_legend=1 00:40:20.780 --rc geninfo_all_blocks=1 00:40:20.780 --rc geninfo_unexecuted_blocks=1 00:40:20.780 00:40:20.780 ' 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:20.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:20.780 --rc genhtml_branch_coverage=1 00:40:20.780 --rc genhtml_function_coverage=1 
00:40:20.780 --rc genhtml_legend=1 00:40:20.780 --rc geninfo_all_blocks=1 00:40:20.780 --rc geninfo_unexecuted_blocks=1 00:40:20.780 00:40:20.780 ' 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:20.780 08:04:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:20.780 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.781 08:04:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:20.781 08:04:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:20.781 08:04:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:20.781 08:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:22.684 08:04:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:22.684 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:22.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:22.685 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:22.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:22.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:40:22.685 08:04:14 
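The net-device discovery above (mapping each PCI function to its kernel interface) reduces to a small sketch. The sysfs path below is hard-coded for illustration only; the real `nvmf/common.sh` globs the live sysfs tree:

```shell
# Minimal sketch of the lookup seen above: glob the PCI device's net/
# directory, then strip the directory prefix with ${var##*/} so only the
# interface name (e.g. cvl_0_0) remains.
pci="0000:0a:00.0"
pci_net_devs=("/sys/bus/pci/devices/$pci/net/cvl_0_0")   # stand-in for the glob result
pci_net_devs=("${pci_net_devs[@]##*/}")                  # keep basenames only
echo "Found net devices under $pci: ${pci_net_devs[*]}"
# prints: Found net devices under 0000:0a:00.0: cvl_0_0
```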
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:22.685 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:22.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:22.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:40:22.945 00:40:22.945 --- 10.0.0.2 ping statistics --- 00:40:22.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.945 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:22.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:22.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:40:22.945 00:40:22.945 --- 10.0.0.1 ping statistics --- 00:40:22.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.945 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:22.945 08:04:14 
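The `nvmf_tcp_init` sequence above builds a two-port topology: the target-side port (cvl_0_0) is moved into a network namespace, the initiator-side port (cvl_0_1) stays in the host namespace, an iptables rule opens the NVMe/TCP port, and a ping in each direction verifies connectivity. A hedged reconstruction of those commands follows; the `run()` echo wrapper is an addition of mine so the sketch executes without root, and would be swapped for `sudo` to apply the commands for real:

```shell
# Reconstruction of the namespace setup logged above (commands taken from
# nvmf/common.sh lines 267-291 as printed in the xtrace output).
run() { echo "+ $*"; }            # replace with: run() { sudo "$@"; }
NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                           # target port enters the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP, host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, namespace side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
run ping -c 1 10.0.0.2                                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
```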
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3169434 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3169434 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3169434 ']' 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.945 08:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:22.945 [2024-11-19 08:04:14.839685] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:22.945 [2024-11-19 08:04:14.842452] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:22.945 [2024-11-19 08:04:14.842553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:23.203 [2024-11-19 08:04:15.001105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:23.462 [2024-11-19 08:04:15.142208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:23.462 [2024-11-19 08:04:15.142278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:23.462 [2024-11-19 08:04:15.142306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:23.462 [2024-11-19 08:04:15.142328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:23.462 [2024-11-19 08:04:15.142351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:23.462 [2024-11-19 08:04:15.145182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:23.462 [2024-11-19 08:04:15.145458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:23.462 [2024-11-19 08:04:15.145469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.462 [2024-11-19 08:04:15.145478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:23.462 [2024-11-19 08:04:15.146257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.029 08:04:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.029 08:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:24.288 [2024-11-19 08:04:16.097299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:24.288 [2024-11-19 08:04:16.098448] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:24.288 [2024-11-19 08:04:16.099666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:24.288 [2024-11-19 08:04:16.100817] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:24.288 [2024-11-19 08:04:16.106507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:24.288 Malloc0 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.288 08:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.288 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:24.547 [2024-11-19 08:04:16.222803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3169591 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3169592 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:24.547 08:04:16 
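The target-side provisioning above is a plain RPC sequence. The sketch below collects the `rpc_cmd` calls exactly as they appear in the log; note the harness drives them through its own `rpc_cmd` wrapper, and the `rpc.py` client name here is an assumption standing in for it (the `echo` wrapper is mine so the sketch runs without a live target):

```shell
# RPC sequence reconstructed from the rpc_cmd calls logged above.
rpc() { echo "rpc.py $*"; }      # replace with the real RPC client when a target is up
rpc bdev_set_options -p 5 -c 1   # deliberately tiny bdev_io pool/cache to force IO-wait
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `-p 5 -c 1` pool sizing is the point of this test: with so few bdev_io structures available, the four concurrent bdevperf workloads must exercise the bdev-io-wait (queued retry) path.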
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3169595 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:24.547 { 00:40:24.547 "params": { 00:40:24.547 "name": "Nvme$subsystem", 00:40:24.547 "trtype": "$TEST_TRANSPORT", 00:40:24.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.547 "adrfam": "ipv4", 00:40:24.547 "trsvcid": "$NVMF_PORT", 00:40:24.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:24.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:24.547 "hdgst": ${hdgst:-false}, 00:40:24.547 "ddgst": ${ddgst:-false} 00:40:24.547 }, 00:40:24.547 "method": "bdev_nvme_attach_controller" 00:40:24.547 } 00:40:24.547 EOF 00:40:24.547 )") 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:24.547 08:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3169597 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:24.547 { 00:40:24.547 "params": { 00:40:24.547 "name": "Nvme$subsystem", 00:40:24.547 "trtype": "$TEST_TRANSPORT", 00:40:24.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.547 "adrfam": "ipv4", 00:40:24.547 "trsvcid": "$NVMF_PORT", 00:40:24.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:24.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:24.547 "hdgst": ${hdgst:-false}, 00:40:24.547 "ddgst": ${ddgst:-false} 00:40:24.547 }, 00:40:24.547 "method": "bdev_nvme_attach_controller" 00:40:24.547 } 00:40:24.547 EOF 00:40:24.547 )") 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:24.547 08:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:24.547 { 00:40:24.547 "params": { 00:40:24.547 "name": "Nvme$subsystem", 00:40:24.547 "trtype": "$TEST_TRANSPORT", 00:40:24.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.547 "adrfam": "ipv4", 00:40:24.547 "trsvcid": "$NVMF_PORT", 00:40:24.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:24.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:24.547 "hdgst": ${hdgst:-false}, 00:40:24.547 "ddgst": ${ddgst:-false} 00:40:24.547 }, 00:40:24.547 "method": "bdev_nvme_attach_controller" 00:40:24.547 } 00:40:24.547 EOF 00:40:24.547 )") 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:24.547 { 00:40:24.547 "params": { 00:40:24.547 "name": "Nvme$subsystem", 00:40:24.547 "trtype": "$TEST_TRANSPORT", 00:40:24.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:24.547 "adrfam": "ipv4", 00:40:24.547 "trsvcid": "$NVMF_PORT", 00:40:24.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:24.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:24.547 "hdgst": ${hdgst:-false}, 00:40:24.547 "ddgst": ${ddgst:-false} 00:40:24.547 }, 
00:40:24.547 "method": "bdev_nvme_attach_controller" 00:40:24.547 } 00:40:24.547 EOF 00:40:24.547 )") 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3169591 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:24.547 "params": { 00:40:24.547 "name": "Nvme1", 00:40:24.547 "trtype": "tcp", 00:40:24.547 "traddr": "10.0.0.2", 00:40:24.547 "adrfam": "ipv4", 00:40:24.547 "trsvcid": "4420", 00:40:24.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:24.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:24.547 "hdgst": false, 00:40:24.547 "ddgst": false 00:40:24.547 }, 00:40:24.547 "method": "bdev_nvme_attach_controller" 00:40:24.547 }' 00:40:24.547 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:24.547 "params": { 00:40:24.547 "name": "Nvme1", 00:40:24.547 "trtype": "tcp", 00:40:24.547 "traddr": "10.0.0.2", 00:40:24.547 "adrfam": "ipv4", 00:40:24.547 "trsvcid": "4420", 00:40:24.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:24.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:24.548 "hdgst": false, 00:40:24.548 "ddgst": false 00:40:24.548 }, 00:40:24.548 "method": "bdev_nvme_attach_controller" 00:40:24.548 }' 00:40:24.548 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:40:24.548 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:24.548 "params": { 00:40:24.548 "name": "Nvme1", 00:40:24.548 "trtype": "tcp", 00:40:24.548 "traddr": "10.0.0.2", 00:40:24.548 "adrfam": "ipv4", 00:40:24.548 "trsvcid": "4420", 00:40:24.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:24.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:24.548 "hdgst": false, 00:40:24.548 "ddgst": false 00:40:24.548 }, 00:40:24.548 "method": "bdev_nvme_attach_controller" 00:40:24.548 }' 00:40:24.548 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 
-- # IFS=, 00:40:24.548 08:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:24.548 "params": { 00:40:24.548 "name": "Nvme1", 00:40:24.548 "trtype": "tcp", 00:40:24.548 "traddr": "10.0.0.2", 00:40:24.548 "adrfam": "ipv4", 00:40:24.548 "trsvcid": "4420", 00:40:24.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:24.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:24.548 "hdgst": false, 00:40:24.548 "ddgst": false 00:40:24.548 }, 00:40:24.548 "method": "bdev_nvme_attach_controller" 00:40:24.548 }' 00:40:24.548 [2024-11-19 08:04:16.314270] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:24.548 [2024-11-19 08:04:16.314270] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:24.548 [2024-11-19 08:04:16.314273] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:24.548 [2024-11-19 08:04:16.314403] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:24.548 [2024-11-19 08:04:16.314403] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:24.548 [2024-11-19 08:04:16.314425] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:24.548 [2024-11-19 08:04:16.314753] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:24.548 [2024-11-19 08:04:16.314884] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:24.806 [2024-11-19 08:04:16.573947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.806 [2024-11-19 08:04:16.678892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.806 [2024-11-19 08:04:16.695731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:25.065 [2024-11-19 08:04:16.783749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:25.065 [2024-11-19 08:04:16.801804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:25.065 [2024-11-19 08:04:16.884833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:25.065 [2024-11-19 08:04:16.907272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:25.324 [2024-11-19 08:04:17.008336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:25.324 Running I/O for 1 seconds... 00:40:25.583 Running I/O for 1 seconds... 00:40:25.583 Running I/O for 1 seconds... 00:40:25.583 Running I/O for 1 seconds... 
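Each of the four bdevperf instances launched above receives its controller configuration on `/dev/fd/63`, produced by the `gen_nvmf_target_json` here-document template traced in the log. A sketch of that expansion, using the values the log prints:

```shell
# Sketch of gen_nvmf_target_json: a here-document filled from the transport
# variables; hdgst/ddgst default to false via ${var:-false}.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```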
00:40:26.518 5478.00 IOPS, 21.40 MiB/s 00:40:26.519 Latency(us) 00:40:26.519 [2024-11-19T07:04:18.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:26.519 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:26.519 Nvme1n1 : 1.03 5466.52 21.35 0.00 0.00 23164.60 5509.88 44079.03 00:40:26.519 [2024-11-19T07:04:18.449Z] =================================================================================================================== 00:40:26.519 [2024-11-19T07:04:18.449Z] Total : 5466.52 21.35 0.00 0.00 23164.60 5509.88 44079.03 00:40:26.519 5156.00 IOPS, 20.14 MiB/s 00:40:26.519 Latency(us) 00:40:26.519 [2024-11-19T07:04:18.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:26.519 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:26.519 Nvme1n1 : 1.01 5257.38 20.54 0.00 0.00 24242.10 7184.69 45049.93 00:40:26.519 [2024-11-19T07:04:18.449Z] =================================================================================================================== 00:40:26.519 [2024-11-19T07:04:18.449Z] Total : 5257.38 20.54 0.00 0.00 24242.10 7184.69 45049.93 00:40:26.519 7666.00 IOPS, 29.95 MiB/s 00:40:26.519 Latency(us) 00:40:26.519 [2024-11-19T07:04:18.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:26.519 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:26.519 Nvme1n1 : 1.01 7740.16 30.24 0.00 0.00 16459.89 3737.98 26214.40 00:40:26.519 [2024-11-19T07:04:18.449Z] =================================================================================================================== 00:40:26.519 [2024-11-19T07:04:18.449Z] Total : 7740.16 30.24 0.00 0.00 16459.89 3737.98 26214.40 00:40:26.777 151928.00 IOPS, 593.47 MiB/s 00:40:26.777 Latency(us) 00:40:26.777 [2024-11-19T07:04:18.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:26.777 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:40:26.777 Nvme1n1 : 1.00 151618.66 592.26 0.00 0.00 839.86 354.99 2002.49 00:40:26.777 [2024-11-19T07:04:18.707Z] =================================================================================================================== 00:40:26.777 [2024-11-19T07:04:18.707Z] Total : 151618.66 592.26 0.00 0.00 839.86 354.99 2002.49 00:40:27.035 08:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3169592 00:40:27.035 08:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3169595 00:40:27.293 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3169597 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:27.294 08:04:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:27.294 rmmod nvme_tcp 00:40:27.294 rmmod nvme_fabrics 00:40:27.294 rmmod nvme_keyring 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3169434 ']' 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3169434 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3169434 ']' 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3169434 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3169434 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3169434' 00:40:27.294 killing process with pid 3169434 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3169434 00:40:27.294 08:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3169434 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:28.670 08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:28.670 
08:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:30.574 00:40:30.574 real 0m9.785s 00:40:30.574 user 0m22.312s 00:40:30.574 sys 0m4.653s 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:30.574 ************************************ 00:40:30.574 END TEST nvmf_bdev_io_wait 00:40:30.574 ************************************ 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:30.574 ************************************ 00:40:30.574 START TEST nvmf_queue_depth 00:40:30.574 ************************************ 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:30.574 * Looking for test storage... 
00:40:30.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:40:30.574 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:30.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.834 --rc genhtml_branch_coverage=1 00:40:30.834 --rc genhtml_function_coverage=1 00:40:30.834 --rc genhtml_legend=1 00:40:30.834 --rc geninfo_all_blocks=1 00:40:30.834 --rc geninfo_unexecuted_blocks=1 00:40:30.834 00:40:30.834 ' 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:30.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.834 --rc genhtml_branch_coverage=1 00:40:30.834 --rc genhtml_function_coverage=1 00:40:30.834 --rc genhtml_legend=1 00:40:30.834 --rc geninfo_all_blocks=1 00:40:30.834 --rc geninfo_unexecuted_blocks=1 00:40:30.834 00:40:30.834 ' 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:30.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.834 --rc genhtml_branch_coverage=1 00:40:30.834 --rc genhtml_function_coverage=1 00:40:30.834 --rc genhtml_legend=1 00:40:30.834 --rc geninfo_all_blocks=1 00:40:30.834 --rc geninfo_unexecuted_blocks=1 00:40:30.834 00:40:30.834 ' 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:30.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.834 --rc genhtml_branch_coverage=1 00:40:30.834 --rc genhtml_function_coverage=1 00:40:30.834 --rc genhtml_legend=1 00:40:30.834 --rc 
geninfo_all_blocks=1 00:40:30.834 --rc geninfo_unexecuted_blocks=1 00:40:30.834 00:40:30.834 ' 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:30.834 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.835 08:04:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:30.835 08:04:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:30.835 08:04:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:30.835 08:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:32.814 
08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:32.814 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:32.814 08:04:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:32.814 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:32.814 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:32.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:32.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:32.815 08:04:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:32.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:32.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:40:32.815 00:40:32.815 --- 10.0.0.2 ping statistics --- 00:40:32.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.815 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:32.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:32.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:40:32.815 00:40:32.815 --- 10.0.0.1 ping statistics --- 00:40:32.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:32.815 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:32.815 08:04:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3172073 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3172073 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3172073 ']' 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:32.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:32.815 08:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:32.815 [2024-11-19 08:04:24.689098] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:32.815 [2024-11-19 08:04:24.691591] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:32.815 [2024-11-19 08:04:24.691717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:33.074 [2024-11-19 08:04:24.848820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.074 [2024-11-19 08:04:24.984331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:33.074 [2024-11-19 08:04:24.984417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:33.074 [2024-11-19 08:04:24.984446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:33.074 [2024-11-19 08:04:24.984467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:33.074 [2024-11-19 08:04:24.984488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:33.074 [2024-11-19 08:04:24.986080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:33.640 [2024-11-19 08:04:25.347562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:33.640 [2024-11-19 08:04:25.348000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:33.897 [2024-11-19 08:04:25.679140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:33.897 Malloc0 00:40:33.897 08:04:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:33.897 [2024-11-19 08:04:25.803298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.897 
08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3172227 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3172227 /var/tmp/bdevperf.sock 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3172227 ']' 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:33.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:33.897 08:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:34.156 [2024-11-19 08:04:25.889727] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:34.156 [2024-11-19 08:04:25.889876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172227 ] 00:40:34.156 [2024-11-19 08:04:26.032698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:34.416 [2024-11-19 08:04:26.168425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.983 08:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:34.983 08:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:40:34.984 08:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:34.984 08:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.984 08:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:35.244 NVMe0n1 00:40:35.244 08:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.244 08:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:35.244 Running I/O for 10 seconds... 
00:40:37.562 6089.00 IOPS, 23.79 MiB/s [2024-11-19T07:04:30.426Z] 5910.00 IOPS, 23.09 MiB/s [2024-11-19T07:04:31.364Z] 5813.33 IOPS, 22.71 MiB/s [2024-11-19T07:04:32.300Z] 5889.00 IOPS, 23.00 MiB/s [2024-11-19T07:04:33.236Z] 5935.40 IOPS, 23.19 MiB/s [2024-11-19T07:04:34.175Z] 5910.50 IOPS, 23.09 MiB/s [2024-11-19T07:04:35.551Z] 5883.57 IOPS, 22.98 MiB/s [2024-11-19T07:04:36.490Z] 5889.50 IOPS, 23.01 MiB/s [2024-11-19T07:04:37.426Z] 5917.00 IOPS, 23.11 MiB/s [2024-11-19T07:04:37.426Z] 5934.40 IOPS, 23.18 MiB/s 00:40:45.496 Latency(us) 00:40:45.496 [2024-11-19T07:04:37.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:45.496 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:45.496 Verification LBA range: start 0x0 length 0x4000 00:40:45.496 NVMe0n1 : 10.16 5944.00 23.22 0.00 0.00 171350.58 25049.32 99032.18 00:40:45.496 [2024-11-19T07:04:37.426Z] =================================================================================================================== 00:40:45.496 [2024-11-19T07:04:37.426Z] Total : 5944.00 23.22 0.00 0.00 171350.58 25049.32 99032.18 00:40:45.496 { 00:40:45.496 "results": [ 00:40:45.496 { 00:40:45.496 "job": "NVMe0n1", 00:40:45.496 "core_mask": "0x1", 00:40:45.496 "workload": "verify", 00:40:45.496 "status": "finished", 00:40:45.496 "verify_range": { 00:40:45.496 "start": 0, 00:40:45.496 "length": 16384 00:40:45.496 }, 00:40:45.496 "queue_depth": 1024, 00:40:45.496 "io_size": 4096, 00:40:45.496 "runtime": 10.156131, 00:40:45.496 "iops": 5943.995799187703, 00:40:45.496 "mibps": 23.218733590576964, 00:40:45.496 "io_failed": 0, 00:40:45.496 "io_timeout": 0, 00:40:45.496 "avg_latency_us": 171350.57575168597, 00:40:45.496 "min_latency_us": 25049.315555555557, 00:40:45.496 "max_latency_us": 99032.17777777778 00:40:45.496 } 00:40:45.496 ], 00:40:45.496 "core_count": 1 00:40:45.496 } 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3172227 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3172227 ']' 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3172227 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3172227 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3172227' 00:40:45.496 killing process with pid 3172227 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3172227 00:40:45.496 Received shutdown signal, test time was about 10.000000 seconds 00:40:45.496 00:40:45.496 Latency(us) 00:40:45.496 [2024-11-19T07:04:37.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:45.496 [2024-11-19T07:04:37.426Z] =================================================================================================================== 00:40:45.496 [2024-11-19T07:04:37.426Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:45.496 08:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3172227 00:40:46.434 08:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:46.434 rmmod nvme_tcp 00:40:46.434 rmmod nvme_fabrics 00:40:46.434 rmmod nvme_keyring 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3172073 ']' 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3172073 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3172073 ']' 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3172073 00:40:46.434 08:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3172073 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3172073' 00:40:46.434 killing process with pid 3172073 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3172073 00:40:46.434 08:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3172073 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:47.811 08:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:50.348 00:40:50.348 real 0m19.336s 00:40:50.348 user 0m26.894s 00:40:50.348 sys 0m3.668s 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:50.348 ************************************ 00:40:50.348 END TEST nvmf_queue_depth 00:40:50.348 ************************************ 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:50.348 ************************************ 00:40:50.348 START 
TEST nvmf_target_multipath 00:40:50.348 ************************************ 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:50.348 * Looking for test storage... 00:40:50.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:50.348 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:50.349 08:04:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:50.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.349 --rc genhtml_branch_coverage=1 00:40:50.349 --rc genhtml_function_coverage=1 00:40:50.349 --rc genhtml_legend=1 00:40:50.349 --rc geninfo_all_blocks=1 00:40:50.349 --rc geninfo_unexecuted_blocks=1 00:40:50.349 00:40:50.349 ' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:50.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.349 --rc genhtml_branch_coverage=1 00:40:50.349 --rc genhtml_function_coverage=1 00:40:50.349 --rc genhtml_legend=1 00:40:50.349 --rc geninfo_all_blocks=1 00:40:50.349 --rc geninfo_unexecuted_blocks=1 00:40:50.349 00:40:50.349 ' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:50.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.349 --rc genhtml_branch_coverage=1 00:40:50.349 --rc genhtml_function_coverage=1 00:40:50.349 --rc genhtml_legend=1 00:40:50.349 --rc geninfo_all_blocks=1 00:40:50.349 --rc geninfo_unexecuted_blocks=1 00:40:50.349 00:40:50.349 ' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:50.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.349 --rc genhtml_branch_coverage=1 00:40:50.349 --rc genhtml_function_coverage=1 00:40:50.349 --rc genhtml_legend=1 00:40:50.349 --rc geninfo_all_blocks=1 00:40:50.349 --rc geninfo_unexecuted_blocks=1 00:40:50.349 00:40:50.349 ' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:50.349 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:50.349 08:04:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:50.350 08:04:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:50.350 08:04:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:52.259 08:04:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:52.259 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:52.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:52.259 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:52.259 08:04:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:52.259 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:52.259 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:52.260 08:04:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:52.260 08:04:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:52.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:52.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:40:52.260 00:40:52.260 --- 10.0.0.2 ping statistics --- 00:40:52.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:52.260 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:52.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:52.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:40:52.260 00:40:52.260 --- 10.0.0.1 ping statistics --- 00:40:52.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:52.260 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:52.260 only one NIC for nvmf test 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:52.260 08:04:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:52.260 08:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:52.260 rmmod nvme_tcp 00:40:52.260 rmmod nvme_fabrics 00:40:52.260 rmmod nvme_keyring 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:52.260 08:04:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:52.260 08:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:54.166 
08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:54.166 00:40:54.166 real 0m4.307s 00:40:54.166 user 0m0.865s 00:40:54.166 sys 0m1.418s 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:54.166 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:54.166 ************************************ 00:40:54.166 END TEST nvmf_target_multipath 00:40:54.166 ************************************ 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:54.425 ************************************ 00:40:54.425 START TEST nvmf_zcopy 00:40:54.425 ************************************ 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:54.425 * Looking for test storage... 
00:40:54.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:54.425 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:54.426 08:04:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:54.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.426 --rc genhtml_branch_coverage=1 00:40:54.426 --rc genhtml_function_coverage=1 00:40:54.426 --rc genhtml_legend=1 00:40:54.426 --rc geninfo_all_blocks=1 00:40:54.426 --rc geninfo_unexecuted_blocks=1 00:40:54.426 00:40:54.426 ' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:54.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.426 --rc genhtml_branch_coverage=1 00:40:54.426 --rc genhtml_function_coverage=1 00:40:54.426 --rc genhtml_legend=1 00:40:54.426 --rc geninfo_all_blocks=1 00:40:54.426 --rc geninfo_unexecuted_blocks=1 00:40:54.426 00:40:54.426 ' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:54.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.426 --rc genhtml_branch_coverage=1 00:40:54.426 --rc genhtml_function_coverage=1 00:40:54.426 --rc genhtml_legend=1 00:40:54.426 --rc geninfo_all_blocks=1 00:40:54.426 --rc geninfo_unexecuted_blocks=1 00:40:54.426 00:40:54.426 ' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:54.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.426 --rc genhtml_branch_coverage=1 00:40:54.426 --rc genhtml_function_coverage=1 00:40:54.426 --rc genhtml_legend=1 00:40:54.426 --rc geninfo_all_blocks=1 00:40:54.426 --rc geninfo_unexecuted_blocks=1 00:40:54.426 00:40:54.426 ' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:54.426 08:04:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:54.426 08:04:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:54.426 08:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:56.960 
08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:56.960 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:56.961 08:04:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:56.961 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:56.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:56.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:56.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:56.961 08:04:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:56.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:56.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:40:56.961 00:40:56.961 --- 10.0.0.2 ping statistics --- 00:40:56.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:56.961 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:56.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:56.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:40:56.961 00:40:56.961 --- 10.0.0.1 ping statistics --- 00:40:56.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:56.961 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:56.961 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3177664 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3177664 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3177664 ']' 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:56.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:56.962 08:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:56.962 [2024-11-19 08:04:48.703567] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:56.962 [2024-11-19 08:04:48.706392] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:40:56.962 [2024-11-19 08:04:48.706508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:56.962 [2024-11-19 08:04:48.857045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:57.221 [2024-11-19 08:04:48.978644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:57.221 [2024-11-19 08:04:48.978747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:57.222 [2024-11-19 08:04:48.978788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:57.222 [2024-11-19 08:04:48.978806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:57.222 [2024-11-19 08:04:48.978825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:57.222 [2024-11-19 08:04:48.980249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:57.481 [2024-11-19 08:04:49.304325] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:57.481 [2024-11-19 08:04:49.304747] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:58.049 [2024-11-19 08:04:49.709278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:58.049 
08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:58.049 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:58.050 [2024-11-19 08:04:49.725459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:58.050 malloc0 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:58.050 { 00:40:58.050 "params": { 00:40:58.050 "name": "Nvme$subsystem", 00:40:58.050 "trtype": "$TEST_TRANSPORT", 00:40:58.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:58.050 "adrfam": "ipv4", 00:40:58.050 "trsvcid": "$NVMF_PORT", 00:40:58.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:58.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:58.050 "hdgst": ${hdgst:-false}, 00:40:58.050 "ddgst": ${ddgst:-false} 00:40:58.050 }, 00:40:58.050 "method": "bdev_nvme_attach_controller" 00:40:58.050 } 00:40:58.050 EOF 00:40:58.050 )") 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:58.050 08:04:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:58.050 08:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:58.050 "params": { 00:40:58.050 "name": "Nvme1", 00:40:58.050 "trtype": "tcp", 00:40:58.050 "traddr": "10.0.0.2", 00:40:58.050 "adrfam": "ipv4", 00:40:58.050 "trsvcid": "4420", 00:40:58.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:58.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:58.050 "hdgst": false, 00:40:58.050 "ddgst": false 00:40:58.050 }, 00:40:58.050 "method": "bdev_nvme_attach_controller" 00:40:58.050 }' 00:40:58.050 [2024-11-19 08:04:49.873082] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:40:58.050 [2024-11-19 08:04:49.873205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3177816 ] 00:40:58.309 [2024-11-19 08:04:50.018516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:58.309 [2024-11-19 08:04:50.157404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.878 Running I/O for 10 seconds... 
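The `gen_nvmf_target_json` steps traced above (common.sh@560–586) build the bdevperf attach config by expanding a heredoc template once per subsystem, collecting the fragments in an array, and joining them with commas before handing the JSON to bdevperf over a file descriptor (`--json /dev/fd/62`). A simplified, self-contained sketch with the values from this run hard-coded and the `jq .` pretty-printing pass omitted:

```shell
# Simplified sketch of gen_nvmf_target_json as traced above: expand a
# per-subsystem template, then emit the collected fragments comma-joined.
# Address/port values are the ones from this run.
gen_nvmf_target_json() {
	local subsystem
	local config=()
	for subsystem in "${@:-1}"; do
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	local IFS=,
	printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

In the test itself the output is fed to bdevperf via process substitution, roughly `bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192`, matching the single-object JSON printed in the trace.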
00:41:00.749 4111.00 IOPS, 32.12 MiB/s [2024-11-19T07:04:54.088Z] 4167.00 IOPS, 32.55 MiB/s [2024-11-19T07:04:55.022Z] 4190.33 IOPS, 32.74 MiB/s [2024-11-19T07:04:55.961Z] 4207.75 IOPS, 32.87 MiB/s [2024-11-19T07:04:56.896Z] 4213.80 IOPS, 32.92 MiB/s [2024-11-19T07:04:57.832Z] 4208.00 IOPS, 32.88 MiB/s [2024-11-19T07:04:58.770Z] 4217.43 IOPS, 32.95 MiB/s [2024-11-19T07:04:59.709Z] 4217.00 IOPS, 32.95 MiB/s [2024-11-19T07:05:01.085Z] 4235.78 IOPS, 33.09 MiB/s [2024-11-19T07:05:01.085Z] 4232.80 IOPS, 33.07 MiB/s 00:41:09.155 Latency(us) 00:41:09.155 [2024-11-19T07:05:01.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:09.156 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:09.156 Verification LBA range: start 0x0 length 0x1000 00:41:09.156 Nvme1n1 : 10.07 4217.81 32.95 0.00 0.00 30148.62 4830.25 45632.47 00:41:09.156 [2024-11-19T07:05:01.086Z] =================================================================================================================== 00:41:09.156 [2024-11-19T07:05:01.086Z] Total : 4217.81 32.95 0.00 0.00 30148.62 4830.25 45632.47 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3179136 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:09.724 08:05:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:09.724 { 00:41:09.724 "params": { 00:41:09.724 "name": "Nvme$subsystem", 00:41:09.724 "trtype": "$TEST_TRANSPORT", 00:41:09.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:09.724 "adrfam": "ipv4", 00:41:09.724 "trsvcid": "$NVMF_PORT", 00:41:09.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:09.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:09.724 "hdgst": ${hdgst:-false}, 00:41:09.724 "ddgst": ${ddgst:-false} 00:41:09.724 }, 00:41:09.724 "method": "bdev_nvme_attach_controller" 00:41:09.724 } 00:41:09.724 EOF 00:41:09.724 )") 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:09.724 [2024-11-19 08:05:01.649222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.724 [2024-11-19 08:05:01.649283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:09.724 08:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:09.724 "params": { 00:41:09.724 "name": "Nvme1", 00:41:09.724 "trtype": "tcp", 00:41:09.724 "traddr": "10.0.0.2", 00:41:09.724 "adrfam": "ipv4", 00:41:09.724 "trsvcid": "4420", 00:41:09.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:09.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:09.724 "hdgst": false, 00:41:09.724 "ddgst": false 00:41:09.724 }, 00:41:09.724 "method": "bdev_nvme_attach_controller" 00:41:09.724 }' 00:41:09.724 [2024-11-19 08:05:01.657144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.724 [2024-11-19 08:05:01.657180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.984 [2024-11-19 08:05:01.665113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.984 [2024-11-19 08:05:01.665147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.984 [2024-11-19 08:05:01.673120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.984 [2024-11-19 08:05:01.673153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.984 [2024-11-19 08:05:01.681122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.984 [2024-11-19 08:05:01.681155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.984 [2024-11-19 08:05:01.689091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.984 [2024-11-19 08:05:01.689124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.984 [2024-11-19 08:05:01.697109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:41:09.984 [2024-11-19 08:05:01.697141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.984 [2024-11-19 08:05:01.705103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.984 [2024-11-19 08:05:01.705136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.984 [2024-11-19 08:05:01.713087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.984 [2024-11-19 08:05:01.713119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.984 [2024-11-19 08:05:01.721104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.984 [2024-11-19 08:05:01.721136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.726290] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:41:09.985 [2024-11-19 08:05:01.726413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3179136 ] 00:41:09.985 [2024-11-19 08:05:01.729085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.729117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.737100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.737133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.745104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.745136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
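The bdevperf summary above reports 4217.81 IOPS and 32.95 MiB/s for the 10-second verify run at an 8 KiB IO size (`-o 8192`). Those two columns are self-consistent, assuming MiB/s is computed as IOPS × IO size / 2^20, which a one-liner confirms:

```shell
# Cross-check the bdevperf summary line: 4217.81 IOPS at 8192-byte IOs.
awk 'BEGIN { printf "%.2f MiB/s\n", 4217.81 * 8192 / 1048576 }'
# prints: 32.95 MiB/s
```

The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follow are expected here: the zcopy test repeatedly re-issues `nvmf_subsystem_add_ns` against a namespace that already exists while IO is in flight, so each RPC attempt is logged as an error by the target.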
00:41:09.985 [2024-11-19 08:05:01.753081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.753112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.761111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.761151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.769101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.769133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.777107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.777139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.785100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.785132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.793087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.793118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.801125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.801157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.809138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.809171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.817108] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.817139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.825125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.825157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.833113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.833145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.841112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.841143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.849122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.849153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.857105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.857136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.865117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.865148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.866922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.985 [2024-11-19 08:05:01.873145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.873177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:09.985 [2024-11-19 08:05:01.881121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.881157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.889197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.889251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.897135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.897168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.905107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.905139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:09.985 [2024-11-19 08:05:01.913134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:09.985 [2024-11-19 08:05:01.913166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.921098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.921130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.929133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.929166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.937122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.937154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.945105] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.945137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.953125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.953157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.961124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.961156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.969119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.969151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.977120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.977152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.985118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.985149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:01.993117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:01.993151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.001118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.001151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.007588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started 
on core 0 00:41:10.245 [2024-11-19 08:05:02.009098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.009130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.017124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.017157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.025189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.025243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.033166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.033220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.041133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.041167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.049120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.049152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.057121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.057161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.065131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.065165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:10.245 [2024-11-19 08:05:02.073098] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:10.245 [2024-11-19 08:05:02.073130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2123 / nvmf_rpc.c:1517 error pair repeats, only the timestamps advancing, from 08:05:02.081 through 08:05:02.561 ...]
00:41:10.766 Running I/O for 5 seconds...
[... the same error pair continues from 08:05:02.577 through 08:05:03.566 ...]
00:41:11.807 8597.00 IOPS, 67.16 MiB/s [2024-11-19T07:05:03.737Z]
[... the same error pair continues from 08:05:03.580 through 08:05:04.258 ...]
00:41:12.588 [2024-11-19 08:05:04.274025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588
[2024-11-19 08:05:04.274066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.289086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.289121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.303935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.303986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.318797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.318834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.334008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.334049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.349280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.349320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.364679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.364731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.379307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.379340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.393291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.393331] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.407992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.408026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.423259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.423299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.437929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.437980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.588 [2024-11-19 08:05:04.452383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.588 [2024-11-19 08:05:04.452425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.589 [2024-11-19 08:05:04.467757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.589 [2024-11-19 08:05:04.467794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.589 [2024-11-19 08:05:04.482559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.589 [2024-11-19 08:05:04.482596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.589 [2024-11-19 08:05:04.497326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.589 [2024-11-19 08:05:04.497370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.589 [2024-11-19 08:05:04.511795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.589 [2024-11-19 08:05:04.511829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:12.849 [2024-11-19 08:05:04.527285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.527326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.542341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.542381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.556923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.556960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.571302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.571337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 8633.50 IOPS, 67.45 MiB/s [2024-11-19T07:05:04.779Z] [2024-11-19 08:05:04.584979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.585013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.599733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.599785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.614592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.614634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.629895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.629931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:12.849 [2024-11-19 08:05:04.645418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.645459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.660715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.660782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.675718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.675770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.690188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.690228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.705302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.705341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.719943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.719984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.734370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.734410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.748783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.748818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.763344] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.763384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:12.849 [2024-11-19 08:05:04.777951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:12.849 [2024-11-19 08:05:04.777991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.792463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.792497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.806610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.806658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.820597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.820636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.837368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.837408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.852311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.852345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.866937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.866976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.881464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.881505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.896288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.896322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.910516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.910557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.925939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.925989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.940932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.940985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.956141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.956181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.971176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.971210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:04.985670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:04.985719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:05.000450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 
[2024-11-19 08:05:05.000490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:05.014921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:05.014990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.107 [2024-11-19 08:05:05.029166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.107 [2024-11-19 08:05:05.029206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.365 [2024-11-19 08:05:05.042784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.365 [2024-11-19 08:05:05.042821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.365 [2024-11-19 08:05:05.059017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.059051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.073291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.073332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.088081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.088131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.102099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.102138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.118761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.118814] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.132958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.133006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.145779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.145814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.161529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.161568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.176176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.176217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.190414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.190454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.204781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.204815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.219084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.219124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.234008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.234060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:13.366 [2024-11-19 08:05:05.248519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.248558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.262412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.262451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.277039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.277078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.366 [2024-11-19 08:05:05.292091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.366 [2024-11-19 08:05:05.292123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.306630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.306672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.321348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.321380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.335715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.335768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.350164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.350204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.364173] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.364214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.379061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.379100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.394436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.394475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.408719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.408770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.422562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.422600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.437276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.437315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.451874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.451909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.466637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.466677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.480360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.480399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.493559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.624 [2024-11-19 08:05:05.493591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.624 [2024-11-19 08:05:05.512530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.625 [2024-11-19 08:05:05.512562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.625 [2024-11-19 08:05:05.524948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.625 [2024-11-19 08:05:05.524982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.625 [2024-11-19 08:05:05.540754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.625 [2024-11-19 08:05:05.540788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.625 [2024-11-19 08:05:05.555050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.625 [2024-11-19 08:05:05.555083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.884 [2024-11-19 08:05:05.569250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.884 [2024-11-19 08:05:05.569289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.884 8643.00 IOPS, 67.52 MiB/s [2024-11-19T07:05:05.814Z] [2024-11-19 08:05:05.583683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.884 [2024-11-19 08:05:05.583731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.884 [2024-11-19 08:05:05.597499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:13.884 [2024-11-19 08:05:05.597538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.884 [2024-11-19 08:05:05.611909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.884 [2024-11-19 08:05:05.611944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.884 [2024-11-19 08:05:05.626372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.884 [2024-11-19 08:05:05.626404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.884 [2024-11-19 08:05:05.639786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.884 [2024-11-19 08:05:05.639819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.884 [2024-11-19 08:05:05.654902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.884 [2024-11-19 08:05:05.654934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.884 [2024-11-19 08:05:05.668762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.884 [2024-11-19 08:05:05.668796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.884 [2024-11-19 08:05:05.683396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.884 [2024-11-19 08:05:05.683437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.885 [2024-11-19 08:05:05.697322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.885 [2024-11-19 08:05:05.697355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.885 [2024-11-19 08:05:05.712145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.885 
[2024-11-19 08:05:05.712186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.885 [2024-11-19 08:05:05.728834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.885 [2024-11-19 08:05:05.728870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.885 [2024-11-19 08:05:05.743241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.885 [2024-11-19 08:05:05.743282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.885 [2024-11-19 08:05:05.757159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.885 [2024-11-19 08:05:05.757199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.885 [2024-11-19 08:05:05.770049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.885 [2024-11-19 08:05:05.770088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.885 [2024-11-19 08:05:05.785496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.885 [2024-11-19 08:05:05.785536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.885 [2024-11-19 08:05:05.798855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.885 [2024-11-19 08:05:05.798889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:13.885 [2024-11-19 08:05:05.813992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:13.885 [2024-11-19 08:05:05.814032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.828512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.828554] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.843071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.843104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.857817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.857851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.871803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.871835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.886632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.886664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.900996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.901028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.915530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.915569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.930018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.930065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.948137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.948178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:14.145 [2024-11-19 08:05:05.961768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.961803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.977494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.977526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:05.991590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:05.991629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:06.005950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:06.005999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:06.020793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:06.020826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:06.035110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:06.035149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:06.049241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:06.049280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:06.062908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:06.062942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.145 [2024-11-19 08:05:06.076388] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.145 [2024-11-19 08:05:06.076421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.092843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.092876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.107321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.107360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.122152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.122184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.136908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.136944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.152085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.152118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.166525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.166558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.181309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.181341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.195786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.195819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.209932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.209981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.227471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.227504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.240038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.240070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.255674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.255725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.269657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.269713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.289916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.289950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.302837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.302870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.318885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 
[2024-11-19 08:05:06.318918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.405 [2024-11-19 08:05:06.333157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.405 [2024-11-19 08:05:06.333195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.347658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.347711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.361470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.361510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.376206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.376245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.391205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.391245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.406138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.406177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.420149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.420188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.434953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.434992] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.449256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.449294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.462841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.462883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.479072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.479112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.494307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.494348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.508660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.508708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.523139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.523171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.537192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.537232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.551569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.551601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:14.665 [2024-11-19 08:05:06.566053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.566093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 8686.25 IOPS, 67.86 MiB/s [2024-11-19T07:05:06.595Z] [2024-11-19 08:05:06.583628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.583661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.665 [2024-11-19 08:05:06.596920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.665 [2024-11-19 08:05:06.596957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.924 [2024-11-19 08:05:06.612829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.924 [2024-11-19 08:05:06.612862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.924 [2024-11-19 08:05:06.627531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.924 [2024-11-19 08:05:06.627562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.924 [2024-11-19 08:05:06.642278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.924 [2024-11-19 08:05:06.642310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.924 [2024-11-19 08:05:06.656560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.924 [2024-11-19 08:05:06.656599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.924 [2024-11-19 08:05:06.671264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.924 [2024-11-19 08:05:06.671303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:14.924 [2024-11-19 08:05:06.686158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.924 [2024-11-19 08:05:06.686197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.924 [2024-11-19 08:05:06.700464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.924 [2024-11-19 08:05:06.700503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.924 [2024-11-19 08:05:06.715197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.924 [2024-11-19 08:05:06.715235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.924 [2024-11-19 08:05:06.730106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.925 [2024-11-19 08:05:06.730138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.925 [2024-11-19 08:05:06.746421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.925 [2024-11-19 08:05:06.746477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.925 [2024-11-19 08:05:06.758437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.925 [2024-11-19 08:05:06.758476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.925 [2024-11-19 08:05:06.774356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.925 [2024-11-19 08:05:06.774389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.925 [2024-11-19 08:05:06.788425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.925 [2024-11-19 08:05:06.788465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.925 [2024-11-19 08:05:06.801891] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.925 [2024-11-19 08:05:06.801926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.925 [2024-11-19 08:05:06.818965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.925 [2024-11-19 08:05:06.819007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.925 [2024-11-19 08:05:06.832930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.925 [2024-11-19 08:05:06.832994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:14.925 [2024-11-19 08:05:06.848858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:14.925 [2024-11-19 08:05:06.848899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.864320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.864360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.880074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.880108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.895227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.895262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.909837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.909870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.924308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.924347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.939074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.939114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.952997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.953036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.966301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.966334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.982102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.982135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:06.996275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:06.996314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:07.010594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:07.010633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:07.024954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:07.025016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:07.039940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 
[2024-11-19 08:05:07.039988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:07.054922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:07.054961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:07.069069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:07.069108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:07.083493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:07.083533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:07.097872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:07.097906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.183 [2024-11-19 08:05:07.112401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.183 [2024-11-19 08:05:07.112438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.126879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.126915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.141467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.141506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.156283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.156315] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.171042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.171074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.185837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.185873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.205352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.205385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.217991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.218030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.234249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.234288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.248789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.248822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.263701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.263735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.279116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.279156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:15.442 [2024-11-19 08:05:07.294124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.294164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.309230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.309262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.323993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.324041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.339288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.339327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.354597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.354637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.442 [2024-11-19 08:05:07.369090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.442 [2024-11-19 08:05:07.369129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.383946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.383995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.398898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.398935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.414002] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.414041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.428921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.428959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.443372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.443419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.457350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.457381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.471257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.471297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.486020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.486059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.500729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.500780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.515260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.700 [2024-11-19 08:05:07.515299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.700 [2024-11-19 08:05:07.530989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:41:15.700 [2024-11-19 08:05:07.531021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.700 [2024-11-19 08:05:07.545984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.700 [2024-11-19 08:05:07.546016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.700 [2024-11-19 08:05:07.561308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.700 [2024-11-19 08:05:07.561347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.700 [2024-11-19 08:05:07.575259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.700 [2024-11-19 08:05:07.575297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.700 8674.40 IOPS, 67.77 MiB/s [2024-11-19T07:05:07.630Z]
[2024-11-19 08:05:07.588103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.700 [2024-11-19 08:05:07.588136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.700 [2024-11-19 08:05:07.628898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.700 [2024-11-19 08:05:07.628932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.700
00:41:15.700 Latency(us)
00:41:15.700 [2024-11-19T07:05:07.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:15.700 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:15.700 Nvme1n1 : 5.05 8613.69 67.29 0.00 0.00 14719.87 3762.25 54370.61
[2024-11-19T07:05:07.630Z] ===================================================================================================================
00:41:15.700 [2024-11-19T07:05:07.630Z] Total : 8613.69 67.29 0.00 0.00 14719.87 3762.25 54370.61
00:41:15.700 [2024-11-19 08:05:07.633132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.700 [2024-11-19 08:05:07.633166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.959 [2024-11-19 08:05:07.641112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.959 [2024-11-19 08:05:07.641149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.959 [2024-11-19 08:05:07.649125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.959 [2024-11-19 08:05:07.649161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.959 [2024-11-19 08:05:07.657118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.959 [2024-11-19 08:05:07.657152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.959 [2024-11-19 08:05:07.665104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.959 [2024-11-19 08:05:07.665137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.959 [2024-11-19 08:05:07.673116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.959 [2024-11-19 08:05:07.673149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.959 [2024-11-19 08:05:07.681176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.959 [2024-11-19 08:05:07.681227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.959 [2024-11-19 08:05:07.689185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:15.959 [2024-11-19 08:05:07.689246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:15.959 [2024-11-19 08:05:07.697180]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.959 [2024-11-19 08:05:07.697216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.959 [2024-11-19 08:05:07.705094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.959 [2024-11-19 08:05:07.705126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.959 [2024-11-19 08:05:07.713148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.959 [2024-11-19 08:05:07.713181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.959 [2024-11-19 08:05:07.721123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.959 [2024-11-19 08:05:07.721155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.959 [2024-11-19 08:05:07.729101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.959 [2024-11-19 08:05:07.729132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.959 [2024-11-19 08:05:07.737117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.959 [2024-11-19 08:05:07.737157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.959 [2024-11-19 08:05:07.745115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.959 [2024-11-19 08:05:07.745146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.959 [2024-11-19 08:05:07.753098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:15.959 [2024-11-19 08:05:07.753129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.959 [2024-11-19 08:05:07.761115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:15.959 [2024-11-19 08:05:07.761147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:15.959 [last message pair repeated at ~8 ms intervals, 08:05:07.769 through 08:05:08.541] 00:41:16.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3179136) - No such process 00:41:16.738 08:05:08
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3179136 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:16.738 delay0 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.738 08:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:16.998 [2024-11-19 08:05:08.719055] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:25.126 Initializing NVMe Controllers 00:41:25.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:25.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:25.126 Initialization complete. Launching workers. 00:41:25.126 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 225, failed: 17898 00:41:25.126 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17984, failed to submit 139 00:41:25.126 success 17913, unsuccessful 71, failed 0 00:41:25.126 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:25.126 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:25.126 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:25.126 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:25.126 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:25.126 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:25.126 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:25.126 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:25.126 rmmod nvme_tcp 00:41:25.126 rmmod nvme_fabrics 00:41:25.126 rmmod nvme_keyring 00:41:25.126 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:25.126 08:05:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3177664 ']' 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3177664 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3177664 ']' 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3177664 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3177664 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3177664' 00:41:25.127 killing process with pid 3177664 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3177664 00:41:25.127 08:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3177664 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:25.388 08:05:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:25.388 08:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:27.296 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:27.296 00:41:27.296 real 0m33.012s 00:41:27.296 user 0m46.234s 00:41:27.296 sys 0m10.690s 00:41:27.296 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:27.296 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.296 ************************************ 00:41:27.296 END TEST nvmf_zcopy 00:41:27.296 ************************************ 00:41:27.296 08:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:27.296 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:27.296 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:27.296 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:27.296 ************************************ 00:41:27.296 START TEST nvmf_nmic 00:41:27.296 ************************************ 00:41:27.296 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:27.296 * Looking for test storage... 00:41:27.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:27.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:27.554 --rc genhtml_branch_coverage=1 00:41:27.554 --rc 
genhtml_function_coverage=1 00:41:27.554 --rc genhtml_legend=1 00:41:27.554 --rc geninfo_all_blocks=1 00:41:27.554 --rc geninfo_unexecuted_blocks=1 00:41:27.554 00:41:27.554 ' 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:27.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:27.554 --rc genhtml_branch_coverage=1 00:41:27.554 --rc genhtml_function_coverage=1 00:41:27.554 --rc genhtml_legend=1 00:41:27.554 --rc geninfo_all_blocks=1 00:41:27.554 --rc geninfo_unexecuted_blocks=1 00:41:27.554 00:41:27.554 ' 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:27.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:27.554 --rc genhtml_branch_coverage=1 00:41:27.554 --rc genhtml_function_coverage=1 00:41:27.554 --rc genhtml_legend=1 00:41:27.554 --rc geninfo_all_blocks=1 00:41:27.554 --rc geninfo_unexecuted_blocks=1 00:41:27.554 00:41:27.554 ' 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:27.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:27.554 --rc genhtml_branch_coverage=1 00:41:27.554 --rc genhtml_function_coverage=1 00:41:27.554 --rc genhtml_legend=1 00:41:27.554 --rc geninfo_all_blocks=1 00:41:27.554 --rc geninfo_unexecuted_blocks=1 00:41:27.554 00:41:27.554 ' 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:27.554 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.555 08:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:27.555 08:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:27.555 08:05:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:27.555 08:05:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:29.463 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:29.463 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:41:29.463 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:29.463 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:29.463 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:29.463 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:29.464 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:29.464 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:29.464 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:29.464 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:29.464 08:05:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:29.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:29.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:41:29.464 00:41:29.464 --- 10.0.0.2 ping statistics --- 00:41:29.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:29.464 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:41:29.464 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:29.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:29.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:41:29.464 00:41:29.464 --- 10.0.0.1 ping statistics --- 00:41:29.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:29.464 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:41:29.465 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:29.465 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:41:29.465 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:29.465 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:29.465 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:29.465 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:29.465 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:29.465 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:29.465 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:29.723 08:05:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3182770 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3182770 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3182770 ']' 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:29.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:29.723 08:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:29.723 [2024-11-19 08:05:21.514922] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:41:29.723 [2024-11-19 08:05:21.517371] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:41:29.723 [2024-11-19 08:05:21.517480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:29.984 [2024-11-19 08:05:21.680622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:29.984 [2024-11-19 08:05:21.875429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:29.984 [2024-11-19 08:05:21.875511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:29.984 [2024-11-19 08:05:21.875545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:29.984 [2024-11-19 08:05:21.875569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:29.984 [2024-11-19 08:05:21.875595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:29.984 [2024-11-19 08:05:21.878864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:29.984 [2024-11-19 08:05:21.878928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:29.984 [2024-11-19 08:05:21.878975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:29.984 [2024-11-19 08:05:21.878982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:30.554 [2024-11-19 08:05:22.314486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:30.554 [2024-11-19 08:05:22.322951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:41:30.554 [2024-11-19 08:05:22.323109] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:30.554 [2024-11-19 08:05:22.324345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:30.554 [2024-11-19 08:05:22.324710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:30.554 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:30.554 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:41:30.554 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:30.554 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:30.554 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.814 [2024-11-19 08:05:22.504188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.814 Malloc0 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.814 [2024-11-19 
08:05:22.632572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.814 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:30.814 test case1: single bdev can't be used in multiple subsystems 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.815 08:05:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.815 [2024-11-19 08:05:22.656094] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:30.815 [2024-11-19 08:05:22.656156] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:30.815 [2024-11-19 08:05:22.656191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:30.815 request: 00:41:30.815 { 00:41:30.815 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:30.815 "namespace": { 00:41:30.815 "bdev_name": "Malloc0", 00:41:30.815 "no_auto_visible": false 00:41:30.815 }, 00:41:30.815 "method": "nvmf_subsystem_add_ns", 00:41:30.815 "req_id": 1 00:41:30.815 } 00:41:30.815 Got JSON-RPC error response 00:41:30.815 response: 00:41:30.815 { 00:41:30.815 "code": -32602, 00:41:30.815 "message": "Invalid parameters" 00:41:30.815 } 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:30.815 Adding namespace failed - expected result. 
00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:30.815 test case2: host connect to nvmf target in multiple paths 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:30.815 [2024-11-19 08:05:22.664221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.815 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:31.073 08:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:31.333 08:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:31.333 08:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:41:31.333 08:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:31.333 08:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:31.333 08:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:41:33.933 08:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:33.933 08:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:33.933 08:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:33.933 08:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:33.933 08:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:33.933 08:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:41:33.933 08:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:33.933 [global] 00:41:33.933 thread=1 00:41:33.933 invalidate=1 00:41:33.933 rw=write 00:41:33.933 time_based=1 00:41:33.933 runtime=1 00:41:33.933 ioengine=libaio 00:41:33.933 direct=1 00:41:33.933 bs=4096 00:41:33.933 iodepth=1 00:41:33.933 norandommap=0 00:41:33.933 numjobs=1 00:41:33.933 00:41:33.933 verify_dump=1 00:41:33.933 verify_backlog=512 00:41:33.933 verify_state_save=0 00:41:33.933 do_verify=1 00:41:33.933 verify=crc32c-intel 00:41:33.933 [job0] 00:41:33.933 filename=/dev/nvme0n1 00:41:33.933 Could not set queue depth (nvme0n1) 00:41:33.933 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:33.933 fio-3.35 00:41:33.933 Starting 1 thread 00:41:34.869 00:41:34.869 job0: (groupid=0, jobs=1): err= 0: pid=3183396: Tue Nov 19 
08:05:26 2024 00:41:34.869 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:41:34.869 slat (nsec): min=10430, max=36432, avg=27831.05, stdev=9045.29 00:41:34.869 clat (usec): min=374, max=42037, avg=39492.53, stdev=8753.89 00:41:34.869 lat (usec): min=395, max=42054, avg=39520.36, stdev=8755.62 00:41:34.869 clat percentiles (usec): 00:41:34.869 | 1.00th=[ 375], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:34.869 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:41:34.869 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:34.869 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:34.869 | 99.99th=[42206] 00:41:34.869 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:41:34.869 slat (usec): min=7, max=26391, avg=68.80, stdev=1165.59 00:41:34.869 clat (usec): min=170, max=390, avg=216.83, stdev=24.15 00:41:34.869 lat (usec): min=178, max=26627, avg=285.63, stdev=1166.80 00:41:34.869 clat percentiles (usec): 00:41:34.869 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 196], 00:41:34.869 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 223], 00:41:34.869 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 260], 00:41:34.869 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 392], 99.95th=[ 392], 00:41:34.869 | 99.99th=[ 392] 00:41:34.869 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:41:34.869 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:34.869 lat (usec) : 250=90.64%, 500=5.43% 00:41:34.869 lat (msec) : 50=3.93% 00:41:34.869 cpu : usr=0.59%, sys=1.18%, ctx=538, majf=0, minf=1 00:41:34.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:34.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:34.869 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:34.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:34.869 00:41:34.869 Run status group 0 (all jobs): 00:41:34.869 READ: bw=86.3KiB/s (88.3kB/s), 86.3KiB/s-86.3KiB/s (88.3kB/s-88.3kB/s), io=88.0KiB (90.1kB), run=1020-1020msec 00:41:34.869 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:41:34.869 00:41:34.869 Disk stats (read/write): 00:41:34.869 nvme0n1: ios=45/512, merge=0/0, ticks=1736/108, in_queue=1844, util=98.50% 00:41:34.869 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:35.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:35.127 08:05:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:35.127 rmmod nvme_tcp 00:41:35.127 rmmod nvme_fabrics 00:41:35.127 rmmod nvme_keyring 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3182770 ']' 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3182770 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3182770 ']' 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3182770 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3182770 
00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3182770' 00:41:35.127 killing process with pid 3182770 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3182770 00:41:35.127 08:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3182770 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.501 08:05:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:36.501 08:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:39.040 00:41:39.040 real 0m11.185s 00:41:39.040 user 0m19.512s 00:41:39.040 sys 0m3.682s 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:39.040 ************************************ 00:41:39.040 END TEST nvmf_nmic 00:41:39.040 ************************************ 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:39.040 ************************************ 00:41:39.040 START TEST nvmf_fio_target 00:41:39.040 ************************************ 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:41:39.040 * Looking for test storage... 
00:41:39.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:39.040 
08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:39.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:39.040 --rc genhtml_branch_coverage=1 00:41:39.040 --rc genhtml_function_coverage=1 00:41:39.040 --rc genhtml_legend=1 00:41:39.040 --rc geninfo_all_blocks=1 00:41:39.040 --rc geninfo_unexecuted_blocks=1 00:41:39.040 00:41:39.040 ' 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:39.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:39.040 --rc genhtml_branch_coverage=1 00:41:39.040 --rc genhtml_function_coverage=1 00:41:39.040 --rc genhtml_legend=1 00:41:39.040 --rc geninfo_all_blocks=1 00:41:39.040 --rc geninfo_unexecuted_blocks=1 00:41:39.040 00:41:39.040 ' 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:39.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:39.040 --rc genhtml_branch_coverage=1 00:41:39.040 --rc genhtml_function_coverage=1 00:41:39.040 --rc genhtml_legend=1 00:41:39.040 --rc geninfo_all_blocks=1 00:41:39.040 --rc geninfo_unexecuted_blocks=1 00:41:39.040 00:41:39.040 ' 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:39.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:39.040 --rc genhtml_branch_coverage=1 00:41:39.040 --rc genhtml_function_coverage=1 00:41:39.040 --rc genhtml_legend=1 00:41:39.040 --rc geninfo_all_blocks=1 
00:41:39.040 --rc geninfo_unexecuted_blocks=1 00:41:39.040 00:41:39.040 ' 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:39.040 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:39.041 
08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.041 08:05:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:39.041 
08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:39.041 08:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:41:39.041 08:05:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:40.943 08:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:40.943 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:40.943 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:40.943 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.944 
08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:40.944 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:40.944 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:40.944 08:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:40.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:40.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:41:40.944 00:41:40.944 --- 10.0.0.2 ping statistics --- 00:41:40.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.944 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:40.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:40.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:41:40.944 00:41:40.944 --- 10.0.0.1 ping statistics --- 00:41:40.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.944 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:40.944 08:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3185611 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3185611 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3185611 ']' 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:40.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:40.944 08:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:41.204 [2024-11-19 08:05:32.941212] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:41.204 [2024-11-19 08:05:32.944195] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:41:41.204 [2024-11-19 08:05:32.944306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:41.204 [2024-11-19 08:05:33.103445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:41.463 [2024-11-19 08:05:33.231603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:41.463 [2024-11-19 08:05:33.231697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:41.463 [2024-11-19 08:05:33.231725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:41.463 [2024-11-19 08:05:33.231760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:41.463 [2024-11-19 08:05:33.231779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:41.463 [2024-11-19 08:05:33.234480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:41.463 [2024-11-19 08:05:33.237752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:41.463 [2024-11-19 08:05:33.237861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.463 [2024-11-19 08:05:33.237869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:41.722 [2024-11-19 08:05:33.605451] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:41.722 [2024-11-19 08:05:33.621000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:41.722 [2024-11-19 08:05:33.621186] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:41.722 [2024-11-19 08:05:33.621976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:41.722 [2024-11-19 08:05:33.622281] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:41.982 08:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:41.982 08:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:41.982 08:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:41.982 08:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:41.982 08:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:41.982 08:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:41.982 08:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:42.242 [2024-11-19 08:05:34.154953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:42.501 08:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:42.759 08:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:42.759 08:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:41:43.016 08:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:43.016 08:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:43.581 08:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:43.581 08:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:43.840 08:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:43.840 08:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:44.098 08:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:44.667 08:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:44.667 08:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:44.925 08:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:44.925 08:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:45.491 08:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:41:45.491 08:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:45.491 08:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:46.057 08:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:46.057 08:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:46.315 08:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:46.315 08:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:46.574 08:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:46.832 [2024-11-19 08:05:38.635180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:46.832 08:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:47.089 08:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:47.655 08:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:47.655 08:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:47.655 08:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:47.655 08:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:47.655 08:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:47.655 08:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:47.655 08:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:41:50.188 08:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:50.188 08:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:50.188 08:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:50.188 08:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:41:50.188 08:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:50.188 08:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:41:50.188 08:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:50.188 [global] 00:41:50.188 thread=1 00:41:50.188 invalidate=1 00:41:50.188 rw=write 00:41:50.188 time_based=1 00:41:50.188 runtime=1 00:41:50.188 ioengine=libaio 00:41:50.188 direct=1 00:41:50.188 bs=4096 00:41:50.188 iodepth=1 00:41:50.188 norandommap=0 00:41:50.188 numjobs=1 00:41:50.188 00:41:50.188 verify_dump=1 00:41:50.188 verify_backlog=512 00:41:50.188 verify_state_save=0 00:41:50.188 do_verify=1 00:41:50.188 verify=crc32c-intel 00:41:50.188 [job0] 00:41:50.188 filename=/dev/nvme0n1 00:41:50.188 [job1] 00:41:50.188 filename=/dev/nvme0n2 00:41:50.188 [job2] 00:41:50.188 filename=/dev/nvme0n3 00:41:50.188 [job3] 00:41:50.188 filename=/dev/nvme0n4 00:41:50.188 Could not set queue depth (nvme0n1) 00:41:50.188 Could not set queue depth (nvme0n2) 00:41:50.188 Could not set queue depth (nvme0n3) 00:41:50.188 Could not set queue depth (nvme0n4) 00:41:50.188 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:50.188 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:50.188 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:50.188 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:50.188 fio-3.35 00:41:50.188 Starting 4 threads 00:41:51.563 00:41:51.563 job0: (groupid=0, jobs=1): err= 0: pid=3186801: Tue Nov 19 08:05:43 2024 00:41:51.564 read: IOPS=1536, BW=6146KiB/s (6293kB/s)(6152KiB/1001msec) 00:41:51.564 slat (nsec): min=5269, max=26003, avg=6722.01, stdev=2417.67 00:41:51.564 clat (usec): min=252, max=431, avg=280.71, stdev=17.22 00:41:51.564 lat (usec): min=257, max=438, 
avg=287.43, stdev=17.60 00:41:51.564 clat percentiles (usec): 00:41:51.564 | 1.00th=[ 255], 5.00th=[ 260], 10.00th=[ 262], 20.00th=[ 269], 00:41:51.564 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:41:51.564 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 306], 00:41:51.564 | 99.00th=[ 326], 99.50th=[ 392], 99.90th=[ 429], 99.95th=[ 433], 00:41:51.564 | 99.99th=[ 433] 00:41:51.564 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:51.564 slat (nsec): min=7057, max=43527, avg=10428.00, stdev=4500.26 00:41:51.564 clat (usec): min=189, max=629, avg=257.29, stdev=84.25 00:41:51.564 lat (usec): min=196, max=646, avg=267.72, stdev=86.80 00:41:51.564 clat percentiles (usec): 00:41:51.564 | 1.00th=[ 194], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:41:51.564 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:41:51.564 | 70.00th=[ 233], 80.00th=[ 338], 90.00th=[ 408], 95.00th=[ 441], 00:41:51.564 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 553], 99.95th=[ 603], 00:41:51.564 | 99.99th=[ 627] 00:41:51.564 bw ( KiB/s): min= 8192, max= 8192, per=50.30%, avg=8192.00, stdev= 0.00, samples=1 00:41:51.564 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:51.564 lat (usec) : 250=42.36%, 500=57.33%, 750=0.31% 00:41:51.564 cpu : usr=2.50%, sys=4.00%, ctx=3587, majf=0, minf=1 00:41:51.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:51.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.564 issued rwts: total=1538,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:51.564 job1: (groupid=0, jobs=1): err= 0: pid=3186802: Tue Nov 19 08:05:43 2024 00:41:51.564 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:41:51.564 slat (nsec): min=6476, max=30398, 
avg=14579.33, stdev=4107.07 00:41:51.564 clat (usec): min=40964, max=42002, avg=41599.49, stdev=492.93 00:41:51.564 lat (usec): min=40977, max=42016, avg=41614.07, stdev=493.00 00:41:51.564 clat percentiles (usec): 00:41:51.564 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:51.564 | 30.00th=[41157], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:41:51.564 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:51.564 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:51.564 | 99.99th=[42206] 00:41:51.564 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:41:51.564 slat (nsec): min=5979, max=31513, avg=8689.13, stdev=4451.74 00:41:51.564 clat (usec): min=200, max=504, avg=243.95, stdev=26.82 00:41:51.564 lat (usec): min=206, max=521, avg=252.64, stdev=27.47 00:41:51.564 clat percentiles (usec): 00:41:51.564 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:41:51.564 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 243], 00:41:51.564 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:41:51.564 | 99.00th=[ 383], 99.50th=[ 424], 99.90th=[ 506], 99.95th=[ 506], 00:41:51.564 | 99.99th=[ 506] 00:41:51.564 bw ( KiB/s): min= 4096, max= 4096, per=25.15%, avg=4096.00, stdev= 0.00, samples=1 00:41:51.564 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:51.564 lat (usec) : 250=73.92%, 500=21.95%, 750=0.19% 00:41:51.564 lat (msec) : 50=3.94% 00:41:51.564 cpu : usr=0.30%, sys=0.30%, ctx=535, majf=0, minf=1 00:41:51.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:51.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.564 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.564 latency : target=0, window=0, percentile=100.00%, depth=1 
00:41:51.564 job2: (groupid=0, jobs=1): err= 0: pid=3186809: Tue Nov 19 08:05:43 2024 00:41:51.564 read: IOPS=257, BW=1031KiB/s (1056kB/s)(1032KiB/1001msec) 00:41:51.564 slat (nsec): min=6014, max=28850, avg=11610.27, stdev=4351.26 00:41:51.564 clat (usec): min=287, max=41149, avg=3049.07, stdev=9837.82 00:41:51.564 lat (usec): min=300, max=41177, avg=3060.68, stdev=9838.82 00:41:51.564 clat percentiles (usec): 00:41:51.564 | 1.00th=[ 297], 5.00th=[ 343], 10.00th=[ 383], 20.00th=[ 416], 00:41:51.564 | 30.00th=[ 433], 40.00th=[ 449], 50.00th=[ 465], 60.00th=[ 482], 00:41:51.564 | 70.00th=[ 498], 80.00th=[ 515], 90.00th=[ 578], 95.00th=[40633], 00:41:51.564 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:51.564 | 99.99th=[41157] 00:41:51.564 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:41:51.564 slat (nsec): min=6179, max=42185, avg=14921.71, stdev=5254.33 00:41:51.564 clat (usec): min=208, max=633, avg=391.25, stdev=81.51 00:41:51.564 lat (usec): min=224, max=665, avg=406.17, stdev=80.80 00:41:51.564 clat percentiles (usec): 00:41:51.564 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 249], 20.00th=[ 302], 00:41:51.564 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 412], 60.00th=[ 429], 00:41:51.564 | 70.00th=[ 441], 80.00th=[ 453], 90.00th=[ 474], 95.00th=[ 498], 00:41:51.564 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 635], 99.95th=[ 635], 00:41:51.564 | 99.99th=[ 635] 00:41:51.564 bw ( KiB/s): min= 4096, max= 4096, per=25.15%, avg=4096.00, stdev= 0.00, samples=1 00:41:51.564 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:51.564 lat (usec) : 250=7.01%, 500=79.87%, 750=10.91% 00:41:51.564 lat (msec) : 50=2.21% 00:41:51.564 cpu : usr=0.70%, sys=0.80%, ctx=770, majf=0, minf=2 00:41:51.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:51.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.564 issued rwts: total=258,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:51.564 job3: (groupid=0, jobs=1): err= 0: pid=3186810: Tue Nov 19 08:05:43 2024 00:41:51.564 read: IOPS=825, BW=3301KiB/s (3380kB/s)(3304KiB/1001msec) 00:41:51.564 slat (nsec): min=4750, max=37218, avg=10830.13, stdev=4081.11 00:41:51.564 clat (usec): min=262, max=41134, avg=883.77, stdev=4442.60 00:41:51.564 lat (usec): min=268, max=41142, avg=894.60, stdev=4442.81 00:41:51.564 clat percentiles (usec): 00:41:51.564 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293], 00:41:51.564 | 30.00th=[ 314], 40.00th=[ 359], 50.00th=[ 383], 60.00th=[ 420], 00:41:51.564 | 70.00th=[ 453], 80.00th=[ 482], 90.00th=[ 537], 95.00th=[ 578], 00:41:51.564 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:51.564 | 99.99th=[41157] 00:41:51.564 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:41:51.564 slat (nsec): min=6020, max=39283, avg=9237.73, stdev=4317.86 00:41:51.564 clat (usec): min=177, max=1307, avg=239.85, stdev=71.68 00:41:51.564 lat (usec): min=184, max=1315, avg=249.09, stdev=72.86 00:41:51.564 clat percentiles (usec): 00:41:51.564 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:41:51.564 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:41:51.564 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 318], 95.00th=[ 379], 00:41:51.564 | 99.00th=[ 416], 99.50th=[ 449], 99.90th=[ 930], 99.95th=[ 1303], 00:41:51.564 | 99.99th=[ 1303] 00:41:51.564 bw ( KiB/s): min= 4096, max= 4096, per=25.15%, avg=4096.00, stdev= 0.00, samples=1 00:41:51.564 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:51.564 lat (usec) : 250=46.00%, 500=46.76%, 750=6.49%, 1000=0.16% 00:41:51.564 lat (msec) : 2=0.05%, 50=0.54% 00:41:51.564 cpu : usr=1.00%, sys=1.80%, ctx=1852, majf=0, minf=1 00:41:51.564 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:51.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:51.565 issued rwts: total=826,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:51.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:51.565 00:41:51.565 Run status group 0 (all jobs): 00:41:51.565 READ: bw=10.3MiB/s (10.8MB/s), 83.5KiB/s-6146KiB/s (85.5kB/s-6293kB/s), io=10.3MiB (10.8MB), run=1001-1006msec 00:41:51.565 WRITE: bw=15.9MiB/s (16.7MB/s), 2036KiB/s-8184KiB/s (2085kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1006msec 00:41:51.565 00:41:51.565 Disk stats (read/write): 00:41:51.565 nvme0n1: ios=1454/1536, merge=0/0, ticks=1384/402, in_queue=1786, util=97.70% 00:41:51.565 nvme0n2: ios=41/512, merge=0/0, ticks=1691/124, in_queue=1815, util=97.86% 00:41:51.565 nvme0n3: ios=24/512, merge=0/0, ticks=639/197, in_queue=836, util=88.91% 00:41:51.565 nvme0n4: ios=659/1024, merge=0/0, ticks=1720/240, in_queue=1960, util=97.78% 00:41:51.565 08:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:51.565 [global] 00:41:51.565 thread=1 00:41:51.565 invalidate=1 00:41:51.565 rw=randwrite 00:41:51.565 time_based=1 00:41:51.565 runtime=1 00:41:51.565 ioengine=libaio 00:41:51.565 direct=1 00:41:51.565 bs=4096 00:41:51.565 iodepth=1 00:41:51.565 norandommap=0 00:41:51.565 numjobs=1 00:41:51.565 00:41:51.565 verify_dump=1 00:41:51.565 verify_backlog=512 00:41:51.565 verify_state_save=0 00:41:51.565 do_verify=1 00:41:51.565 verify=crc32c-intel 00:41:51.565 [job0] 00:41:51.565 filename=/dev/nvme0n1 00:41:51.565 [job1] 00:41:51.565 filename=/dev/nvme0n2 00:41:51.565 [job2] 00:41:51.565 filename=/dev/nvme0n3 00:41:51.565 [job3] 00:41:51.565 filename=/dev/nvme0n4 
00:41:51.565 Could not set queue depth (nvme0n1) 00:41:51.565 Could not set queue depth (nvme0n2) 00:41:51.565 Could not set queue depth (nvme0n3) 00:41:51.565 Could not set queue depth (nvme0n4) 00:41:51.565 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:51.565 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:51.565 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:51.565 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:51.565 fio-3.35 00:41:51.565 Starting 4 threads 00:41:52.944 00:41:52.944 job0: (groupid=0, jobs=1): err= 0: pid=3187032: Tue Nov 19 08:05:44 2024 00:41:52.944 read: IOPS=19, BW=79.1KiB/s (81.0kB/s)(80.0KiB/1011msec) 00:41:52.944 slat (nsec): min=15178, max=36159, avg=22349.45, stdev=8023.68 00:41:52.944 clat (usec): min=40816, max=41438, avg=40992.81, stdev=150.29 00:41:52.944 lat (usec): min=40838, max=41457, avg=41015.16, stdev=148.36 00:41:52.944 clat percentiles (usec): 00:41:52.944 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:41:52.944 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:52.944 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:52.944 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:52.944 | 99.99th=[41681] 00:41:52.944 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:41:52.944 slat (nsec): min=6845, max=69773, avg=23505.36, stdev=11503.78 00:41:52.944 clat (usec): min=209, max=547, avg=341.65, stdev=79.97 00:41:52.944 lat (usec): min=225, max=559, avg=365.16, stdev=78.92 00:41:52.944 clat percentiles (usec): 00:41:52.944 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 251], 00:41:52.944 | 30.00th=[ 281], 40.00th=[ 322], 50.00th=[ 
347], 60.00th=[ 375], 00:41:52.944 | 70.00th=[ 396], 80.00th=[ 416], 90.00th=[ 445], 95.00th=[ 465], 00:41:52.944 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 545], 00:41:52.944 | 99.99th=[ 545] 00:41:52.944 bw ( KiB/s): min= 4087, max= 4087, per=22.54%, avg=4087.00, stdev= 0.00, samples=1 00:41:52.944 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:41:52.944 lat (usec) : 250=18.42%, 500=76.69%, 750=1.13% 00:41:52.944 lat (msec) : 50=3.76% 00:41:52.944 cpu : usr=0.89%, sys=0.89%, ctx=535, majf=0, minf=1 00:41:52.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:52.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.944 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:52.944 job1: (groupid=0, jobs=1): err= 0: pid=3187033: Tue Nov 19 08:05:44 2024 00:41:52.944 read: IOPS=1637, BW=6549KiB/s (6707kB/s)(6556KiB/1001msec) 00:41:52.944 slat (nsec): min=5792, max=46044, avg=11519.72, stdev=4742.42 00:41:52.944 clat (usec): min=241, max=593, avg=283.40, stdev=28.64 00:41:52.944 lat (usec): min=249, max=611, avg=294.92, stdev=31.61 00:41:52.944 clat percentiles (usec): 00:41:52.944 | 1.00th=[ 247], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 260], 00:41:52.945 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:41:52.945 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:41:52.945 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 486], 99.95th=[ 594], 00:41:52.945 | 99.99th=[ 594] 00:41:52.945 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:52.945 slat (nsec): min=7096, max=53535, avg=15828.26, stdev=6852.50 00:41:52.945 clat (usec): min=179, max=1675, avg=228.85, stdev=68.08 00:41:52.945 lat (usec): min=187, max=1685, avg=244.68, stdev=70.00 
00:41:52.945 clat percentiles (usec): 00:41:52.945 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:41:52.945 | 30.00th=[ 200], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:41:52.945 | 70.00th=[ 231], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 293], 00:41:52.945 | 99.00th=[ 437], 99.50th=[ 490], 99.90th=[ 1004], 99.95th=[ 1188], 00:41:52.945 | 99.99th=[ 1680] 00:41:52.945 bw ( KiB/s): min= 8192, max= 8192, per=45.18%, avg=8192.00, stdev= 0.00, samples=1 00:41:52.945 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:52.945 lat (usec) : 250=45.02%, 500=54.71%, 750=0.08%, 1000=0.11% 00:41:52.945 lat (msec) : 2=0.08% 00:41:52.945 cpu : usr=3.80%, sys=7.10%, ctx=3688, majf=0, minf=2 00:41:52.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:52.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.945 issued rwts: total=1639,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:52.945 job2: (groupid=0, jobs=1): err= 0: pid=3187034: Tue Nov 19 08:05:44 2024 00:41:52.945 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:41:52.945 slat (nsec): min=15041, max=37675, avg=23044.68, stdev=8675.64 00:41:52.945 clat (usec): min=337, max=41200, avg=39134.25, stdev=8665.50 00:41:52.945 lat (usec): min=356, max=41219, avg=39157.29, stdev=8666.37 00:41:52.945 clat percentiles (usec): 00:41:52.945 | 1.00th=[ 338], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:52.945 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:52.945 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:52.945 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:52.945 | 99.99th=[41157] 00:41:52.945 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone 
resets 00:41:52.945 slat (nsec): min=7840, max=46678, avg=19467.50, stdev=7435.97 00:41:52.945 clat (usec): min=209, max=378, avg=252.61, stdev=18.37 00:41:52.945 lat (usec): min=218, max=388, avg=272.08, stdev=20.89 00:41:52.945 clat percentiles (usec): 00:41:52.945 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:41:52.945 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 255], 00:41:52.945 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 277], 00:41:52.945 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 379], 99.95th=[ 379], 00:41:52.945 | 99.99th=[ 379] 00:41:52.945 bw ( KiB/s): min= 4096, max= 4096, per=22.59%, avg=4096.00, stdev= 0.00, samples=1 00:41:52.945 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:52.945 lat (usec) : 250=44.01%, 500=52.06% 00:41:52.945 lat (msec) : 50=3.93% 00:41:52.945 cpu : usr=0.90%, sys=1.00%, ctx=535, majf=0, minf=1 00:41:52.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:52.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.945 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:52.945 job3: (groupid=0, jobs=1): err= 0: pid=3187035: Tue Nov 19 08:05:44 2024 00:41:52.945 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:41:52.945 slat (nsec): min=6185, max=49823, avg=14722.58, stdev=5565.55 00:41:52.945 clat (usec): min=286, max=42005, avg=498.29, stdev=2575.37 00:41:52.945 lat (usec): min=293, max=42015, avg=513.02, stdev=2575.13 00:41:52.945 clat percentiles (usec): 00:41:52.945 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:41:52.945 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:41:52.945 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 420], 00:41:52.945 | 99.00th=[ 
578], 99.50th=[ 619], 99.90th=[41681], 99.95th=[42206], 00:41:52.945 | 99.99th=[42206] 00:41:52.945 write: IOPS=1509, BW=6038KiB/s (6183kB/s)(6044KiB/1001msec); 0 zone resets 00:41:52.945 slat (nsec): min=7814, max=64625, avg=17049.81, stdev=9345.59 00:41:52.945 clat (usec): min=212, max=539, avg=289.11, stdev=65.12 00:41:52.945 lat (usec): min=222, max=579, avg=306.16, stdev=70.47 00:41:52.945 clat percentiles (usec): 00:41:52.945 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 229], 00:41:52.945 | 30.00th=[ 235], 40.00th=[ 247], 50.00th=[ 281], 60.00th=[ 297], 00:41:52.945 | 70.00th=[ 318], 80.00th=[ 347], 90.00th=[ 379], 95.00th=[ 420], 00:41:52.945 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 506], 99.95th=[ 537], 00:41:52.945 | 99.99th=[ 537] 00:41:52.945 bw ( KiB/s): min= 4096, max= 4096, per=22.59%, avg=4096.00, stdev= 0.00, samples=1 00:41:52.945 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:52.945 lat (usec) : 250=24.73%, 500=74.36%, 750=0.75% 00:41:52.945 lat (msec) : 50=0.16% 00:41:52.945 cpu : usr=3.20%, sys=5.30%, ctx=2536, majf=0, minf=1 00:41:52.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:52.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.945 issued rwts: total=1024,1511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:52.945 00:41:52.945 Run status group 0 (all jobs): 00:41:52.945 READ: bw=10.5MiB/s (11.0MB/s), 79.1KiB/s-6549KiB/s (81.0kB/s-6707kB/s), io=10.6MiB (11.1MB), run=1001-1011msec 00:41:52.945 WRITE: bw=17.7MiB/s (18.6MB/s), 2026KiB/s-8184KiB/s (2074kB/s-8380kB/s), io=17.9MiB (18.8MB), run=1001-1011msec 00:41:52.945 00:41:52.945 Disk stats (read/write): 00:41:52.945 nvme0n1: ios=68/512, merge=0/0, ticks=1380/169, in_queue=1549, util=98.50% 00:41:52.945 nvme0n2: ios=1518/1536, 
merge=0/0, ticks=437/357, in_queue=794, util=87.51% 00:41:52.945 nvme0n3: ios=77/512, merge=0/0, ticks=1198/122, in_queue=1320, util=98.75% 00:41:52.945 nvme0n4: ios=999/1024, merge=0/0, ticks=651/297, in_queue=948, util=98.85% 00:41:52.945 08:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:52.945 [global] 00:41:52.945 thread=1 00:41:52.945 invalidate=1 00:41:52.945 rw=write 00:41:52.945 time_based=1 00:41:52.945 runtime=1 00:41:52.945 ioengine=libaio 00:41:52.945 direct=1 00:41:52.945 bs=4096 00:41:52.945 iodepth=128 00:41:52.945 norandommap=0 00:41:52.945 numjobs=1 00:41:52.945 00:41:52.945 verify_dump=1 00:41:52.945 verify_backlog=512 00:41:52.945 verify_state_save=0 00:41:52.945 do_verify=1 00:41:52.945 verify=crc32c-intel 00:41:52.945 [job0] 00:41:52.945 filename=/dev/nvme0n1 00:41:52.945 [job1] 00:41:52.945 filename=/dev/nvme0n2 00:41:52.945 [job2] 00:41:52.945 filename=/dev/nvme0n3 00:41:52.945 [job3] 00:41:52.945 filename=/dev/nvme0n4 00:41:52.945 Could not set queue depth (nvme0n1) 00:41:52.945 Could not set queue depth (nvme0n2) 00:41:52.945 Could not set queue depth (nvme0n3) 00:41:52.946 Could not set queue depth (nvme0n4) 00:41:52.946 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:52.946 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:52.946 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:52.946 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:52.946 fio-3.35 00:41:52.946 Starting 4 threads 00:41:54.323 00:41:54.323 job0: (groupid=0, jobs=1): err= 0: pid=3187261: Tue Nov 19 08:05:45 2024 00:41:54.323 read: IOPS=4594, BW=17.9MiB/s 
(18.8MB/s)(18.0MiB/1003msec) 00:41:54.323 slat (usec): min=3, max=6105, avg=99.04, stdev=636.00 00:41:54.323 clat (usec): min=8350, max=19652, avg=13203.63, stdev=2072.67 00:41:54.323 lat (usec): min=8358, max=19666, avg=13302.67, stdev=2114.56 00:41:54.323 clat percentiles (usec): 00:41:54.323 | 1.00th=[ 9372], 5.00th=[10421], 10.00th=[11338], 20.00th=[11731], 00:41:54.323 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[12780], 00:41:54.323 | 70.00th=[13566], 80.00th=[14877], 90.00th=[16712], 95.00th=[17433], 00:41:54.323 | 99.00th=[18482], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:41:54.323 | 99.99th=[19530] 00:41:54.323 write: IOPS=5077, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1003msec); 0 zone resets 00:41:54.323 slat (usec): min=4, max=6058, avg=98.78, stdev=578.45 00:41:54.323 clat (usec): min=2628, max=19414, avg=12972.25, stdev=1717.38 00:41:54.323 lat (usec): min=2637, max=20077, avg=13071.03, stdev=1785.14 00:41:54.323 clat percentiles (usec): 00:41:54.323 | 1.00th=[ 7242], 5.00th=[10945], 10.00th=[11994], 20.00th=[12256], 00:41:54.323 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:41:54.323 | 70.00th=[13304], 80.00th=[13960], 90.00th=[14222], 95.00th=[15270], 00:41:54.323 | 99.00th=[18482], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], 00:41:54.323 | 99.99th=[19530] 00:41:54.323 bw ( KiB/s): min=19248, max=20480, per=36.06%, avg=19864.00, stdev=871.16, samples=2 00:41:54.323 iops : min= 4812, max= 5120, avg=4966.00, stdev=217.79, samples=2 00:41:54.323 lat (msec) : 4=0.35%, 10=2.96%, 20=96.69% 00:41:54.323 cpu : usr=4.39%, sys=9.08%, ctx=379, majf=0, minf=1 00:41:54.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:54.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:54.323 issued rwts: total=4608,5093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.323 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:41:54.323 job1: (groupid=0, jobs=1): err= 0: pid=3187269: Tue Nov 19 08:05:45 2024 00:41:54.323 read: IOPS=2225, BW=8900KiB/s (9114kB/s)(8936KiB/1004msec) 00:41:54.323 slat (usec): min=2, max=28864, avg=164.61, stdev=1136.33 00:41:54.323 clat (usec): min=2527, max=64425, avg=20791.59, stdev=9976.33 00:41:54.323 lat (usec): min=6779, max=64430, avg=20956.20, stdev=10052.13 00:41:54.323 clat percentiles (usec): 00:41:54.323 | 1.00th=[ 6915], 5.00th=[12125], 10.00th=[15270], 20.00th=[16909], 00:41:54.323 | 30.00th=[17433], 40.00th=[17433], 50.00th=[17695], 60.00th=[17695], 00:41:54.323 | 70.00th=[18220], 80.00th=[20317], 90.00th=[32375], 95.00th=[50070], 00:41:54.323 | 99.00th=[61604], 99.50th=[61604], 99.90th=[64226], 99.95th=[64226], 00:41:54.323 | 99.99th=[64226] 00:41:54.323 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:41:54.323 slat (usec): min=3, max=27903, avg=242.11, stdev=1471.67 00:41:54.323 clat (usec): min=4996, max=83185, avg=31594.55, stdev=17758.24 00:41:54.323 lat (usec): min=5003, max=83190, avg=31836.66, stdev=17872.66 00:41:54.323 clat percentiles (usec): 00:41:54.323 | 1.00th=[ 5997], 5.00th=[14222], 10.00th=[15795], 20.00th=[17171], 00:41:54.323 | 30.00th=[19530], 40.00th=[27132], 50.00th=[28967], 60.00th=[29754], 00:41:54.323 | 70.00th=[31065], 80.00th=[33162], 90.00th=[64750], 95.00th=[71828], 00:41:54.323 | 99.00th=[83362], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:41:54.323 | 99.99th=[83362] 00:41:54.323 bw ( KiB/s): min=10192, max=10232, per=18.54%, avg=10212.00, stdev=28.28, samples=2 00:41:54.323 iops : min= 2548, max= 2558, avg=2553.00, stdev= 7.07, samples=2 00:41:54.323 lat (msec) : 4=0.02%, 10=3.02%, 20=48.89%, 50=38.44%, 100=9.62% 00:41:54.323 cpu : usr=1.79%, sys=2.39%, ctx=258, majf=0, minf=1 00:41:54.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:41:54.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:54.323 issued rwts: total=2234,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:54.323 job2: (groupid=0, jobs=1): err= 0: pid=3187288: Tue Nov 19 08:05:45 2024 00:41:54.323 read: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec) 00:41:54.323 slat (usec): min=2, max=18782, avg=176.67, stdev=1163.56 00:41:54.323 clat (usec): min=7414, max=45238, avg=21029.83, stdev=7802.72 00:41:54.323 lat (usec): min=7437, max=45270, avg=21206.50, stdev=7871.00 00:41:54.323 clat percentiles (usec): 00:41:54.323 | 1.00th=[ 9372], 5.00th=[11600], 10.00th=[13173], 20.00th=[16057], 00:41:54.323 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16712], 60.00th=[20841], 00:41:54.323 | 70.00th=[23462], 80.00th=[28443], 90.00th=[33817], 95.00th=[36439], 00:41:54.323 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:41:54.323 | 99.99th=[45351] 00:41:54.323 write: IOPS=2197, BW=8788KiB/s (8999kB/s)(8920KiB/1015msec); 0 zone resets 00:41:54.323 slat (usec): min=4, max=24544, avg=276.86, stdev=1358.85 00:41:54.323 clat (usec): min=1119, max=86724, avg=38432.71, stdev=17985.02 00:41:54.323 lat (usec): min=1129, max=86732, avg=38709.57, stdev=18113.29 00:41:54.323 clat percentiles (usec): 00:41:54.323 | 1.00th=[ 5604], 5.00th=[13698], 10.00th=[22938], 20.00th=[28443], 00:41:54.323 | 30.00th=[29492], 40.00th=[30802], 50.00th=[31065], 60.00th=[33162], 00:41:54.323 | 70.00th=[42206], 80.00th=[53740], 90.00th=[67634], 95.00th=[78119], 00:41:54.323 | 99.00th=[86508], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:41:54.323 | 99.99th=[86508] 00:41:54.323 bw ( KiB/s): min= 8192, max= 8624, per=15.26%, avg=8408.00, stdev=305.47, samples=2 00:41:54.323 iops : min= 2048, max= 2156, avg=2102.00, stdev=76.37, samples=2 00:41:54.323 lat (msec) : 2=0.05%, 4=0.30%, 10=2.45%, 20=30.18%, 50=54.91% 
00:41:54.323 lat (msec) : 100=12.11% 00:41:54.323 cpu : usr=2.27%, sys=4.14%, ctx=251, majf=0, minf=1 00:41:54.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:41:54.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:54.323 issued rwts: total=2048,2230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:54.324 job3: (groupid=0, jobs=1): err= 0: pid=3187299: Tue Nov 19 08:05:45 2024 00:41:54.324 read: IOPS=3918, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1006msec) 00:41:54.324 slat (usec): min=2, max=30666, avg=118.93, stdev=807.84 00:41:54.324 clat (usec): min=4340, max=48501, avg=15837.86, stdev=5013.94 00:41:54.324 lat (usec): min=4803, max=48512, avg=15956.79, stdev=5030.62 00:41:54.324 clat percentiles (usec): 00:41:54.324 | 1.00th=[ 6521], 5.00th=[11207], 10.00th=[12518], 20.00th=[13566], 00:41:54.324 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15139], 60.00th=[15401], 00:41:54.324 | 70.00th=[15926], 80.00th=[17171], 90.00th=[18220], 95.00th=[20055], 00:41:54.324 | 99.00th=[36439], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:41:54.324 | 99.99th=[48497] 00:41:54.324 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:41:54.324 slat (usec): min=3, max=12541, avg=115.60, stdev=646.19 00:41:54.324 clat (usec): min=647, max=50175, avg=15916.81, stdev=3589.73 00:41:54.324 lat (usec): min=655, max=50195, avg=16032.41, stdev=3610.43 00:41:54.324 clat percentiles (usec): 00:41:54.324 | 1.00th=[ 9634], 5.00th=[12387], 10.00th=[13698], 20.00th=[14746], 00:41:54.324 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15401], 60.00th=[15664], 00:41:54.324 | 70.00th=[16057], 80.00th=[16450], 90.00th=[18220], 95.00th=[20055], 00:41:54.324 | 99.00th=[34866], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:41:54.324 | 99.99th=[50070] 00:41:54.324 
bw ( KiB/s): min=16384, max=16416, per=29.77%, avg=16400.00, stdev=22.63, samples=2 00:41:54.324 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:41:54.324 lat (usec) : 750=0.04% 00:41:54.324 lat (msec) : 4=0.19%, 10=2.10%, 20=92.68%, 50=4.98%, 100=0.01% 00:41:54.324 cpu : usr=4.28%, sys=6.87%, ctx=408, majf=0, minf=1 00:41:54.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:54.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:54.324 issued rwts: total=3942,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.324 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:54.324 00:41:54.324 Run status group 0 (all jobs): 00:41:54.324 READ: bw=49.4MiB/s (51.8MB/s), 8071KiB/s-17.9MiB/s (8265kB/s-18.8MB/s), io=50.1MiB (52.6MB), run=1003-1015msec 00:41:54.324 WRITE: bw=53.8MiB/s (56.4MB/s), 8788KiB/s-19.8MiB/s (8999kB/s-20.8MB/s), io=54.6MiB (57.3MB), run=1003-1015msec 00:41:54.324 00:41:54.324 Disk stats (read/write): 00:41:54.324 nvme0n1: ios=4074/4096, merge=0/0, ticks=26894/25253, in_queue=52147, util=97.80% 00:41:54.324 nvme0n2: ios=2098/2087, merge=0/0, ticks=23719/38332, in_queue=62051, util=97.76% 00:41:54.324 nvme0n3: ios=1584/1791, merge=0/0, ticks=32000/70666, in_queue=102666, util=98.12% 00:41:54.324 nvme0n4: ios=3370/3584, merge=0/0, ticks=21039/24104, in_queue=45143, util=97.26% 00:41:54.324 08:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:54.324 [global] 00:41:54.324 thread=1 00:41:54.324 invalidate=1 00:41:54.324 rw=randwrite 00:41:54.324 time_based=1 00:41:54.324 runtime=1 00:41:54.324 ioengine=libaio 00:41:54.324 direct=1 00:41:54.324 bs=4096 00:41:54.324 iodepth=128 00:41:54.324 norandommap=0 00:41:54.324 numjobs=1 
00:41:54.324 00:41:54.324 verify_dump=1 00:41:54.324 verify_backlog=512 00:41:54.324 verify_state_save=0 00:41:54.324 do_verify=1 00:41:54.324 verify=crc32c-intel 00:41:54.324 [job0] 00:41:54.324 filename=/dev/nvme0n1 00:41:54.324 [job1] 00:41:54.324 filename=/dev/nvme0n2 00:41:54.324 [job2] 00:41:54.324 filename=/dev/nvme0n3 00:41:54.324 [job3] 00:41:54.324 filename=/dev/nvme0n4 00:41:54.324 Could not set queue depth (nvme0n1) 00:41:54.324 Could not set queue depth (nvme0n2) 00:41:54.324 Could not set queue depth (nvme0n3) 00:41:54.324 Could not set queue depth (nvme0n4) 00:41:54.324 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:54.324 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:54.324 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:54.324 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:54.324 fio-3.35 00:41:54.324 Starting 4 threads 00:41:55.701 00:41:55.701 job0: (groupid=0, jobs=1): err= 0: pid=3187607: Tue Nov 19 08:05:47 2024 00:41:55.701 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:41:55.701 slat (usec): min=2, max=13503, avg=170.04, stdev=1027.03 00:41:55.701 clat (usec): min=5100, max=50130, avg=20030.35, stdev=8010.50 00:41:55.701 lat (usec): min=5108, max=50137, avg=20200.38, stdev=8077.65 00:41:55.701 clat percentiles (usec): 00:41:55.701 | 1.00th=[ 8356], 5.00th=[13566], 10.00th=[13698], 20.00th=[14615], 00:41:55.701 | 30.00th=[14877], 40.00th=[15795], 50.00th=[16712], 60.00th=[20055], 00:41:55.701 | 70.00th=[21627], 80.00th=[24511], 90.00th=[32113], 95.00th=[40109], 00:41:55.701 | 99.00th=[45351], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:41:55.701 | 99.99th=[50070] 00:41:55.701 write: IOPS=2398, BW=9594KiB/s (9824kB/s)(9632KiB/1004msec); 0 zone 
resets 00:41:55.702 slat (usec): min=3, max=16877, avg=256.86, stdev=1158.58 00:41:55.702 clat (msec): min=2, max=108, avg=35.73, stdev=25.80 00:41:55.702 lat (msec): min=3, max=108, avg=35.99, stdev=25.96 00:41:55.702 clat percentiles (msec): 00:41:55.702 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 14], 00:41:55.702 | 30.00th=[ 15], 40.00th=[ 21], 50.00th=[ 27], 60.00th=[ 33], 00:41:55.702 | 70.00th=[ 47], 80.00th=[ 59], 90.00th=[ 78], 95.00th=[ 88], 00:41:55.702 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 109], 99.95th=[ 109], 00:41:55.702 | 99.99th=[ 109] 00:41:55.702 bw ( KiB/s): min= 8536, max= 9712, per=19.70%, avg=9124.00, stdev=831.56, samples=2 00:41:55.702 iops : min= 2134, max= 2428, avg=2281.00, stdev=207.89, samples=2 00:41:55.702 lat (msec) : 4=0.47%, 10=3.23%, 20=44.25%, 50=37.81%, 100=12.88% 00:41:55.702 lat (msec) : 250=1.35% 00:41:55.702 cpu : usr=2.79%, sys=3.79%, ctx=304, majf=0, minf=1 00:41:55.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:41:55.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:55.702 issued rwts: total=2048,2408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:55.702 job1: (groupid=0, jobs=1): err= 0: pid=3187608: Tue Nov 19 08:05:47 2024 00:41:55.702 read: IOPS=1663, BW=6653KiB/s (6813kB/s)(6680KiB/1004msec) 00:41:55.702 slat (nsec): min=1975, max=29866k, avg=200759.37, stdev=1472153.36 00:41:55.702 clat (usec): min=2352, max=50027, avg=27348.70, stdev=7654.84 00:41:55.702 lat (usec): min=6407, max=50981, avg=27549.46, stdev=7678.51 00:41:55.702 clat percentiles (usec): 00:41:55.702 | 1.00th=[ 9372], 5.00th=[15664], 10.00th=[16581], 20.00th=[21103], 00:41:55.702 | 30.00th=[25297], 40.00th=[26346], 50.00th=[27395], 60.00th=[27657], 00:41:55.702 | 70.00th=[28967], 80.00th=[32375], 90.00th=[34866], 
95.00th=[43779], 00:41:55.702 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:41:55.702 | 99.99th=[50070] 00:41:55.702 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:41:55.702 slat (usec): min=3, max=23824, avg=289.61, stdev=1595.54 00:41:55.702 clat (usec): min=946, max=84195, avg=39938.73, stdev=18136.03 00:41:55.702 lat (usec): min=953, max=84210, avg=40228.35, stdev=18250.86 00:41:55.702 clat percentiles (usec): 00:41:55.702 | 1.00th=[15270], 5.00th=[19268], 10.00th=[20841], 20.00th=[23725], 00:41:55.702 | 30.00th=[26084], 40.00th=[31065], 50.00th=[35390], 60.00th=[42206], 00:41:55.702 | 70.00th=[44303], 80.00th=[56361], 90.00th=[70779], 95.00th=[76022], 00:41:55.702 | 99.00th=[80217], 99.50th=[81265], 99.90th=[84411], 99.95th=[84411], 00:41:55.702 | 99.99th=[84411] 00:41:55.702 bw ( KiB/s): min= 8192, max= 8192, per=17.69%, avg=8192.00, stdev= 0.00, samples=2 00:41:55.702 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:41:55.702 lat (usec) : 1000=0.05% 00:41:55.702 lat (msec) : 4=0.03%, 10=0.48%, 20=10.44%, 50=75.82%, 100=13.18% 00:41:55.702 cpu : usr=1.00%, sys=2.29%, ctx=216, majf=0, minf=1 00:41:55.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:41:55.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:55.702 issued rwts: total=1670,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:55.702 job2: (groupid=0, jobs=1): err= 0: pid=3187611: Tue Nov 19 08:05:47 2024 00:41:55.702 read: IOPS=3093, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1002msec) 00:41:55.702 slat (usec): min=2, max=25883, avg=143.86, stdev=1116.34 00:41:55.702 clat (usec): min=1484, max=46847, avg=18052.21, stdev=5950.94 00:41:55.702 lat (usec): min=4822, max=50940, avg=18196.08, stdev=6048.96 00:41:55.702 clat 
percentiles (usec): 00:41:55.702 | 1.00th=[ 7504], 5.00th=[10028], 10.00th=[12125], 20.00th=[13698], 00:41:55.702 | 30.00th=[14091], 40.00th=[15270], 50.00th=[17695], 60.00th=[19006], 00:41:55.702 | 70.00th=[19530], 80.00th=[21627], 90.00th=[26346], 95.00th=[27657], 00:41:55.702 | 99.00th=[41157], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:41:55.702 | 99.99th=[46924] 00:41:55.702 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:41:55.702 slat (usec): min=3, max=27358, avg=129.84, stdev=1012.02 00:41:55.702 clat (usec): min=819, max=96868, avg=19817.31, stdev=12591.60 00:41:55.702 lat (usec): min=835, max=96882, avg=19947.14, stdev=12648.74 00:41:55.702 clat percentiles (usec): 00:41:55.702 | 1.00th=[ 3392], 5.00th=[ 8029], 10.00th=[ 9241], 20.00th=[12518], 00:41:55.702 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15926], 60.00th=[17695], 00:41:55.702 | 70.00th=[19268], 80.00th=[26084], 90.00th=[31851], 95.00th=[47449], 00:41:55.702 | 99.00th=[70779], 99.50th=[92799], 99.90th=[96994], 99.95th=[96994], 00:41:55.702 | 99.99th=[96994] 00:41:55.702 bw ( KiB/s): min=13672, max=14208, per=30.10%, avg=13940.00, stdev=379.01, samples=2 00:41:55.702 iops : min= 3418, max= 3552, avg=3485.00, stdev=94.75, samples=2 00:41:55.702 lat (usec) : 1000=0.04% 00:41:55.702 lat (msec) : 2=0.22%, 4=0.33%, 10=6.96%, 20=66.29%, 50=23.80% 00:41:55.702 lat (msec) : 100=2.35% 00:41:55.702 cpu : usr=2.50%, sys=3.70%, ctx=255, majf=0, minf=1 00:41:55.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:41:55.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:55.702 issued rwts: total=3100,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:55.702 job3: (groupid=0, jobs=1): err= 0: pid=3187612: Tue Nov 19 08:05:47 2024 00:41:55.702 read: IOPS=3556, 
BW=13.9MiB/s (14.6MB/s)(13.9MiB/1004msec) 00:41:55.702 slat (usec): min=3, max=36668, avg=157.48, stdev=1478.94 00:41:55.702 clat (usec): min=2271, max=67614, avg=19615.32, stdev=7426.89 00:41:55.702 lat (usec): min=4017, max=69878, avg=19772.80, stdev=7552.36 00:41:55.702 clat percentiles (usec): 00:41:55.702 | 1.00th=[ 5604], 5.00th=[13173], 10.00th=[13435], 20.00th=[13960], 00:41:55.702 | 30.00th=[14353], 40.00th=[15401], 50.00th=[16581], 60.00th=[17957], 00:41:55.702 | 70.00th=[23725], 80.00th=[25035], 90.00th=[28705], 95.00th=[32900], 00:41:55.702 | 99.00th=[42206], 99.50th=[42206], 99.90th=[65274], 99.95th=[65274], 00:41:55.702 | 99.99th=[67634] 00:41:55.702 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:41:55.702 slat (usec): min=3, max=26552, avg=113.76, stdev=902.62 00:41:55.702 clat (usec): min=2650, max=49689, avg=15978.90, stdev=4766.09 00:41:55.702 lat (usec): min=2659, max=49736, avg=16092.66, stdev=4818.48 00:41:55.702 clat percentiles (usec): 00:41:55.702 | 1.00th=[ 4359], 5.00th=[ 8455], 10.00th=[10290], 20.00th=[12649], 00:41:55.702 | 30.00th=[14746], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:41:55.702 | 70.00th=[16909], 80.00th=[18220], 90.00th=[22152], 95.00th=[24249], 00:41:55.702 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31327], 99.95th=[33817], 00:41:55.702 | 99.99th=[49546] 00:41:55.702 bw ( KiB/s): min=12288, max=16384, per=30.96%, avg=14336.00, stdev=2896.31, samples=2 00:41:55.702 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:41:55.702 lat (msec) : 4=0.39%, 10=3.52%, 20=70.38%, 50=25.62%, 100=0.08% 00:41:55.702 cpu : usr=4.49%, sys=5.38%, ctx=288, majf=0, minf=1 00:41:55.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:41:55.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:55.702 issued rwts: total=3571,3584,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:41:55.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:55.702 00:41:55.702 Run status group 0 (all jobs): 00:41:55.702 READ: bw=40.4MiB/s (42.4MB/s), 6653KiB/s-13.9MiB/s (6813kB/s-14.6MB/s), io=40.6MiB (42.6MB), run=1002-1004msec 00:41:55.702 WRITE: bw=45.2MiB/s (47.4MB/s), 8159KiB/s-14.0MiB/s (8355kB/s-14.7MB/s), io=45.4MiB (47.6MB), run=1002-1004msec 00:41:55.702 00:41:55.702 Disk stats (read/write): 00:41:55.702 nvme0n1: ios=1579/1871, merge=0/0, ticks=30113/73491, in_queue=103604, util=96.59% 00:41:55.702 nvme0n2: ios=1561/1567, merge=0/0, ticks=24255/37290, in_queue=61545, util=87.41% 00:41:55.702 nvme0n3: ios=2586/2830, merge=0/0, ticks=32454/37739, in_queue=70193, util=97.60% 00:41:55.703 nvme0n4: ios=3078/3303, merge=0/0, ticks=52918/50824, in_queue=103742, util=91.38% 00:41:55.703 08:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:55.703 08:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3187747 00:41:55.703 08:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:55.703 08:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:55.703 [global] 00:41:55.703 thread=1 00:41:55.703 invalidate=1 00:41:55.703 rw=read 00:41:55.703 time_based=1 00:41:55.703 runtime=10 00:41:55.703 ioengine=libaio 00:41:55.703 direct=1 00:41:55.703 bs=4096 00:41:55.703 iodepth=1 00:41:55.703 norandommap=1 00:41:55.703 numjobs=1 00:41:55.703 00:41:55.703 [job0] 00:41:55.703 filename=/dev/nvme0n1 00:41:55.703 [job1] 00:41:55.703 filename=/dev/nvme0n2 00:41:55.703 [job2] 00:41:55.703 filename=/dev/nvme0n3 00:41:55.703 [job3] 00:41:55.703 filename=/dev/nvme0n4 00:41:55.703 Could not set queue depth (nvme0n1) 00:41:55.703 Could not set queue depth 
(nvme0n2) 00:41:55.703 Could not set queue depth (nvme0n3) 00:41:55.703 Could not set queue depth (nvme0n4) 00:41:55.703 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:55.703 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:55.703 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:55.703 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:55.703 fio-3.35 00:41:55.703 Starting 4 threads 00:41:58.986 08:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:58.986 08:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:58.986 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=14520320, buflen=4096 00:41:58.986 fio: pid=3187841, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:59.245 08:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:59.245 08:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:59.245 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11735040, buflen=4096 00:41:59.245 fio: pid=3187840, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:59.503 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2785280, buflen=4096 00:41:59.503 fio: pid=3187838, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:41:59.503 08:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:59.503 08:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:59.761 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=31510528, buflen=4096 00:41:59.761 fio: pid=3187839, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:42:00.019 08:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:00.019 08:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:00.019 00:42:00.019 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3187838: Tue Nov 19 08:05:51 2024 00:42:00.019 read: IOPS=193, BW=772KiB/s (791kB/s)(2720KiB/3522msec) 00:42:00.019 slat (usec): min=4, max=5858, avg=20.57, stdev=226.75 00:42:00.019 clat (usec): min=245, max=59756, avg=5122.35, stdev=13221.14 00:42:00.019 lat (usec): min=250, max=59791, avg=5142.91, stdev=13252.77 00:42:00.019 clat percentiles (usec): 00:42:00.019 | 1.00th=[ 253], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 285], 00:42:00.019 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 343], 60.00th=[ 375], 00:42:00.019 | 70.00th=[ 383], 80.00th=[ 408], 90.00th=[41157], 95.00th=[41157], 00:42:00.019 | 99.00th=[42206], 99.50th=[42206], 99.90th=[59507], 99.95th=[59507], 00:42:00.019 | 99.99th=[59507] 00:42:00.019 bw ( KiB/s): min= 96, max= 4864, per=5.89%, avg=890.67, stdev=1946.53, samples=6 00:42:00.019 iops : min= 24, max= 1216, avg=222.67, stdev=486.63, samples=6 00:42:00.019 
lat (usec) : 250=0.44%, 500=86.64%, 750=0.88% 00:42:00.019 lat (msec) : 2=0.29%, 50=11.45%, 100=0.15% 00:42:00.019 cpu : usr=0.11%, sys=0.20%, ctx=684, majf=0, minf=1 00:42:00.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.019 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.019 issued rwts: total=681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.019 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3187839: Tue Nov 19 08:05:51 2024 00:42:00.019 read: IOPS=1965, BW=7862KiB/s (8051kB/s)(30.1MiB/3914msec) 00:42:00.019 slat (usec): min=4, max=24820, avg=18.89, stdev=328.87 00:42:00.019 clat (usec): min=229, max=84041, avg=487.08, stdev=2449.02 00:42:00.019 lat (usec): min=235, max=84054, avg=505.05, stdev=2557.29 00:42:00.019 clat percentiles (usec): 00:42:00.019 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 00:42:00.019 | 30.00th=[ 273], 40.00th=[ 306], 50.00th=[ 338], 60.00th=[ 379], 00:42:00.019 | 70.00th=[ 404], 80.00th=[ 453], 90.00th=[ 510], 95.00th=[ 553], 00:42:00.019 | 99.00th=[ 668], 99.50th=[ 758], 99.90th=[41157], 99.95th=[41157], 00:42:00.019 | 99.99th=[84411] 00:42:00.019 bw ( KiB/s): min= 99, max=13752, per=58.12%, avg=8781.00, stdev=4382.08, samples=7 00:42:00.020 iops : min= 24, max= 3438, avg=2195.14, stdev=1095.77, samples=7 00:42:00.020 lat (usec) : 250=12.30%, 500=76.27%, 750=10.87%, 1000=0.25% 00:42:00.020 lat (msec) : 2=0.01%, 50=0.27%, 100=0.03% 00:42:00.020 cpu : usr=0.95%, sys=3.04%, ctx=7699, majf=0, minf=1 00:42:00.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:42:00.020 issued rwts: total=7694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.020 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3187840: Tue Nov 19 08:05:51 2024 00:42:00.020 read: IOPS=885, BW=3541KiB/s (3626kB/s)(11.2MiB/3236msec) 00:42:00.020 slat (nsec): min=5194, max=66609, avg=15771.41, stdev=8681.79 00:42:00.020 clat (usec): min=299, max=44062, avg=1101.74, stdev=5330.66 00:42:00.020 lat (usec): min=306, max=44090, avg=1117.51, stdev=5330.86 00:42:00.020 clat percentiles (usec): 00:42:00.020 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 347], 00:42:00.020 | 30.00th=[ 359], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 400], 00:42:00.020 | 70.00th=[ 416], 80.00th=[ 433], 90.00th=[ 461], 95.00th=[ 478], 00:42:00.020 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:42:00.020 | 99.99th=[44303] 00:42:00.020 bw ( KiB/s): min= 104, max=10472, per=25.23%, avg=3812.00, stdev=3854.17, samples=6 00:42:00.020 iops : min= 26, max= 2618, avg=953.00, stdev=963.54, samples=6 00:42:00.020 lat (usec) : 500=96.86%, 750=1.33% 00:42:00.020 lat (msec) : 10=0.03%, 50=1.74% 00:42:00.020 cpu : usr=0.56%, sys=1.58%, ctx=2867, majf=0, minf=2 00:42:00.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.020 issued rwts: total=2866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.020 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3187841: Tue Nov 19 08:05:51 2024 00:42:00.020 read: IOPS=1202, BW=4810KiB/s (4925kB/s)(13.8MiB/2948msec) 00:42:00.020 slat (nsec): min=4369, 
max=41887, avg=12442.58, stdev=4660.59 00:42:00.020 clat (usec): min=247, max=41562, avg=809.72, stdev=3960.04 00:42:00.020 lat (usec): min=254, max=41574, avg=822.17, stdev=3960.24 00:42:00.020 clat percentiles (usec): 00:42:00.020 | 1.00th=[ 289], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 359], 00:42:00.020 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 429], 00:42:00.020 | 70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 515], 95.00th=[ 562], 00:42:00.020 | 99.00th=[ 816], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:42:00.020 | 99.99th=[41681] 00:42:00.020 bw ( KiB/s): min= 384, max= 9200, per=28.30%, avg=4276.80, stdev=3755.80, samples=5 00:42:00.020 iops : min= 96, max= 2300, avg=1069.20, stdev=938.95, samples=5 00:42:00.020 lat (usec) : 250=0.11%, 500=87.34%, 750=11.31%, 1000=0.25% 00:42:00.020 lat (msec) : 50=0.96% 00:42:00.020 cpu : usr=0.71%, sys=1.63%, ctx=3546, majf=0, minf=2 00:42:00.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:00.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.020 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:00.020 issued rwts: total=3546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:00.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:00.020 00:42:00.020 Run status group 0 (all jobs): 00:42:00.020 READ: bw=14.8MiB/s (15.5MB/s), 772KiB/s-7862KiB/s (791kB/s-8051kB/s), io=57.7MiB (60.6MB), run=2948-3914msec 00:42:00.020 00:42:00.020 Disk stats (read/write): 00:42:00.020 nvme0n1: ios=676/0, merge=0/0, ticks=3313/0, in_queue=3313, util=95.85% 00:42:00.020 nvme0n2: ios=7731/0, merge=0/0, ticks=4195/0, in_queue=4195, util=98.92% 00:42:00.020 nvme0n3: ios=2910/0, merge=0/0, ticks=3361/0, in_queue=3361, util=99.75% 00:42:00.020 nvme0n4: ios=3465/0, merge=0/0, ticks=2774/0, in_queue=2774, util=96.75% 00:42:00.278 08:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:00.278 08:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:00.536 08:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:00.536 08:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:00.794 08:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:00.794 08:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:01.365 08:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:01.365 08:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:01.625 08:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:01.625 08:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3187747 00:42:01.625 08:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:01.625 08:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:02.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:02.559 08:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:02.559 nvmf hotplug test: fio failed as expected 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:02.559 08:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:02.559 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:02.559 rmmod nvme_tcp 00:42:02.559 rmmod nvme_fabrics 00:42:02.847 rmmod nvme_keyring 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3185611 ']' 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3185611 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3185611 ']' 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3185611 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- common/autotest_common.sh@959 -- # uname 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3185611 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3185611' 00:42:02.847 killing process with pid 3185611 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3185611 00:42:02.847 08:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3185611 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:42:04.251 08:05:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:04.251 08:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:06.158 00:42:06.158 real 0m27.375s 00:42:06.158 user 1m13.525s 00:42:06.158 sys 0m10.776s 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:06.158 ************************************ 00:42:06.158 END TEST nvmf_fio_target 00:42:06.158 ************************************ 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:06.158 ************************************ 00:42:06.158 START TEST nvmf_bdevio 00:42:06.158 
************************************ 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:06.158 * Looking for test storage... 00:42:06.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:06.158 08:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:06.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.158 --rc genhtml_branch_coverage=1 00:42:06.158 --rc genhtml_function_coverage=1 00:42:06.158 --rc genhtml_legend=1 00:42:06.158 --rc geninfo_all_blocks=1 00:42:06.158 --rc geninfo_unexecuted_blocks=1 00:42:06.158 00:42:06.158 ' 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:06.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.158 --rc genhtml_branch_coverage=1 00:42:06.158 --rc genhtml_function_coverage=1 00:42:06.158 --rc genhtml_legend=1 00:42:06.158 --rc geninfo_all_blocks=1 00:42:06.158 --rc geninfo_unexecuted_blocks=1 00:42:06.158 00:42:06.158 ' 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:06.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.158 --rc genhtml_branch_coverage=1 00:42:06.158 --rc genhtml_function_coverage=1 00:42:06.158 --rc genhtml_legend=1 00:42:06.158 --rc geninfo_all_blocks=1 00:42:06.158 --rc geninfo_unexecuted_blocks=1 00:42:06.158 00:42:06.158 ' 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:06.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:42:06.158 --rc genhtml_branch_coverage=1 00:42:06.158 --rc genhtml_function_coverage=1 00:42:06.158 --rc genhtml_legend=1 00:42:06.158 --rc geninfo_all_blocks=1 00:42:06.158 --rc geninfo_unexecuted_blocks=1 00:42:06.158 00:42:06.158 ' 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:06.158 08:05:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:06.158 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.159 08:05:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:06.159 08:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:08.687 08:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:08.687 08:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:08.687 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:08.687 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:08.687 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:08.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:08.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:08.688 
08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:08.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:08.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:42:08.688 00:42:08.688 --- 10.0.0.2 ping statistics --- 00:42:08.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.688 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:08.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:08.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:42:08.688 00:42:08.688 --- 10.0.0.1 ping statistics --- 00:42:08.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.688 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3190753 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3190753 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3190753 ']' 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:08.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:08.688 08:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:08.688 [2024-11-19 08:06:00.262821] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:08.688 [2024-11-19 08:06:00.265826] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:42:08.688 [2024-11-19 08:06:00.265947] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:08.688 [2024-11-19 08:06:00.424172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:08.688 [2024-11-19 08:06:00.555007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:08.688 [2024-11-19 08:06:00.555086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:08.688 [2024-11-19 08:06:00.555125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:08.688 [2024-11-19 08:06:00.555143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:08.688 [2024-11-19 08:06:00.555161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:08.688 [2024-11-19 08:06:00.557811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:08.688 [2024-11-19 08:06:00.557886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:08.688 [2024-11-19 08:06:00.557925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:08.688 [2024-11-19 08:06:00.557934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:09.256 [2024-11-19 08:06:00.910561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:09.256 [2024-11-19 08:06:00.924047] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:09.256 [2024-11-19 08:06:00.924234] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:42:09.256 [2024-11-19 08:06:00.925074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:09.256 [2024-11-19 08:06:00.925428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:09.514 [2024-11-19 08:06:01.287020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:09.514 Malloc0 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:09.514 [2024-11-19 08:06:01.391239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:09.514 { 00:42:09.514 "params": { 00:42:09.514 "name": "Nvme$subsystem", 00:42:09.514 "trtype": "$TEST_TRANSPORT", 00:42:09.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:09.514 "adrfam": "ipv4", 00:42:09.514 "trsvcid": "$NVMF_PORT", 00:42:09.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:09.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:09.514 "hdgst": ${hdgst:-false}, 00:42:09.514 "ddgst": ${ddgst:-false} 00:42:09.514 }, 00:42:09.514 "method": "bdev_nvme_attach_controller" 00:42:09.514 } 00:42:09.514 EOF 00:42:09.514 )") 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:42:09.514 08:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:09.514 "params": { 00:42:09.514 "name": "Nvme1", 00:42:09.514 "trtype": "tcp", 00:42:09.514 "traddr": "10.0.0.2", 00:42:09.514 "adrfam": "ipv4", 00:42:09.514 "trsvcid": "4420", 00:42:09.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:09.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:09.514 "hdgst": false, 00:42:09.514 "ddgst": false 00:42:09.514 }, 00:42:09.514 "method": "bdev_nvme_attach_controller" 00:42:09.514 }' 00:42:09.773 [2024-11-19 08:06:01.480038] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:42:09.773 [2024-11-19 08:06:01.480162] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190975 ] 00:42:09.773 [2024-11-19 08:06:01.620722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:10.032 [2024-11-19 08:06:01.752684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:10.032 [2024-11-19 08:06:01.752747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:10.032 [2024-11-19 08:06:01.752750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:10.290 I/O targets: 00:42:10.290 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:10.290 00:42:10.290 00:42:10.290 CUnit - A unit testing framework for C - Version 2.1-3 00:42:10.290 http://cunit.sourceforge.net/ 00:42:10.290 00:42:10.290 00:42:10.290 Suite: bdevio tests on: Nvme1n1 00:42:10.548 Test: blockdev write read block ...passed 00:42:10.548 Test: blockdev write zeroes read block ...passed 00:42:10.548 Test: blockdev write zeroes read no split ...passed 00:42:10.548 Test: blockdev 
write zeroes read split ...passed 00:42:10.548 Test: blockdev write zeroes read split partial ...passed 00:42:10.548 Test: blockdev reset ...[2024-11-19 08:06:02.373848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:42:10.548 [2024-11-19 08:06:02.374030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:42:10.548 [2024-11-19 08:06:02.468423] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:42:10.548 passed 00:42:10.548 Test: blockdev write read 8 blocks ...passed 00:42:10.548 Test: blockdev write read size > 128k ...passed 00:42:10.548 Test: blockdev write read invalid size ...passed 00:42:10.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:10.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:10.808 Test: blockdev write read max offset ...passed 00:42:10.808 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:10.808 Test: blockdev writev readv 8 blocks ...passed 00:42:10.808 Test: blockdev writev readv 30 x 1block ...passed 00:42:10.808 Test: blockdev writev readv block ...passed 00:42:10.808 Test: blockdev writev readv size > 128k ...passed 00:42:10.808 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:10.808 Test: blockdev comparev and writev ...[2024-11-19 08:06:02.686352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:10.808 [2024-11-19 08:06:02.686405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:10.808 [2024-11-19 08:06:02.686444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:42:10.808 [2024-11-19 08:06:02.686472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:10.808 [2024-11-19 08:06:02.687064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:10.808 [2024-11-19 08:06:02.687098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:10.808 [2024-11-19 08:06:02.687132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:10.808 [2024-11-19 08:06:02.687159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:10.808 [2024-11-19 08:06:02.687732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:10.808 [2024-11-19 08:06:02.687771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:10.808 [2024-11-19 08:06:02.687806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:10.808 [2024-11-19 08:06:02.687832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:10.808 [2024-11-19 08:06:02.688401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:10.808 [2024-11-19 08:06:02.688434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:10.808 [2024-11-19 08:06:02.688468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:10.808 [2024-11-19 08:06:02.688494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:10.808 passed 00:42:11.068 Test: blockdev nvme passthru rw ...passed 00:42:11.068 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:06:02.771112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:11.068 [2024-11-19 08:06:02.771154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:11.068 [2024-11-19 08:06:02.771410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:11.068 [2024-11-19 08:06:02.771442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:11.068 [2024-11-19 08:06:02.771727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:11.068 [2024-11-19 08:06:02.771759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:11.068 [2024-11-19 08:06:02.772026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:11.068 [2024-11-19 08:06:02.772057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:11.068 passed 00:42:11.068 Test: blockdev nvme admin passthru ...passed 00:42:11.068 Test: blockdev copy ...passed 00:42:11.068 00:42:11.068 Run Summary: Type Total Ran Passed Failed Inactive 00:42:11.068 suites 1 1 n/a 0 0 00:42:11.068 tests 23 23 23 0 0 00:42:11.068 asserts 152 152 152 0 n/a 00:42:11.068 00:42:11.068 Elapsed time = 
1.265 seconds 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:12.006 rmmod nvme_tcp 00:42:12.006 rmmod nvme_fabrics 00:42:12.006 rmmod nvme_keyring 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:12.006 08:06:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3190753 ']' 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3190753 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3190753 ']' 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3190753 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3190753 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3190753' 00:42:12.006 killing process with pid 3190753 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3190753 00:42:12.006 08:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3190753 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:13.383 08:06:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:13.383 08:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:15.292 08:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:15.292 00:42:15.292 real 0m9.307s 00:42:15.292 user 0m16.384s 00:42:15.292 sys 0m3.208s 00:42:15.292 08:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:15.292 08:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:15.292 ************************************ 00:42:15.292 END TEST nvmf_bdevio 00:42:15.292 ************************************ 00:42:15.292 08:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:15.292 00:42:15.292 real 4m28.563s 00:42:15.292 user 9m49.008s 00:42:15.292 sys 1m27.863s 00:42:15.292 08:06:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:15.292 08:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:15.292 ************************************ 00:42:15.292 END TEST nvmf_target_core_interrupt_mode 00:42:15.292 ************************************ 00:42:15.292 08:06:07 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:15.292 08:06:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:15.292 08:06:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:15.292 08:06:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:15.292 ************************************ 00:42:15.292 START TEST nvmf_interrupt 00:42:15.292 ************************************ 00:42:15.292 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:15.552 * Looking for test storage... 
00:42:15.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:15.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.552 --rc genhtml_branch_coverage=1 00:42:15.552 --rc genhtml_function_coverage=1 00:42:15.552 --rc genhtml_legend=1 00:42:15.552 --rc geninfo_all_blocks=1 00:42:15.552 --rc geninfo_unexecuted_blocks=1 00:42:15.552 00:42:15.552 ' 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:15.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.552 --rc genhtml_branch_coverage=1 00:42:15.552 --rc 
genhtml_function_coverage=1 00:42:15.552 --rc genhtml_legend=1 00:42:15.552 --rc geninfo_all_blocks=1 00:42:15.552 --rc geninfo_unexecuted_blocks=1 00:42:15.552 00:42:15.552 ' 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:15.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.552 --rc genhtml_branch_coverage=1 00:42:15.552 --rc genhtml_function_coverage=1 00:42:15.552 --rc genhtml_legend=1 00:42:15.552 --rc geninfo_all_blocks=1 00:42:15.552 --rc geninfo_unexecuted_blocks=1 00:42:15.552 00:42:15.552 ' 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:15.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:15.552 --rc genhtml_branch_coverage=1 00:42:15.552 --rc genhtml_function_coverage=1 00:42:15.552 --rc genhtml_legend=1 00:42:15.552 --rc geninfo_all_blocks=1 00:42:15.552 --rc geninfo_unexecuted_blocks=1 00:42:15.552 00:42:15.552 ' 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:15.552 
08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.552 
08:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:15.552 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:15.553 08:06:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:15.553 
08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:15.553 08:06:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:17.457 08:06:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:17.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:17.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.457 08:06:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:17.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:17.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:17.457 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:17.716 08:06:09 
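For readers following along, the namespace plumbing the log just executed (common.sh@265-284) condenses to the sequence below. The interface names (`cvl_0_0`/`cvl_0_1`), namespace name, and addresses are the ones from this run; since the real commands need root, this sketch only prints what it would execute.

```shell
# Dry-run sketch of the target-namespace setup from common.sh above.
# run() only echoes each command; replace the echo with "$@" to execute.
run() { echo "$@"; }

NS=cvl_0_0_ns_spdk              # target network namespace
TGT_IF=cvl_0_0                  # NIC moved into the namespace (target side)
INI_IF=cvl_0_1                  # NIC left in the root namespace (initiator side)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
```

Moving one port of the dual-port NIC into its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over real hardware, which the two pings that follow verify in both directions.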
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:17.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:17.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:42:17.716 00:42:17.716 --- 10.0.0.2 ping statistics --- 00:42:17.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.716 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:17.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:17.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:42:17.716 00:42:17.716 --- 10.0.0.1 ping statistics --- 00:42:17.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.716 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:17.716 08:06:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3193848 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3193848 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3193848 ']' 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:17.716 08:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:17.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:17.717 08:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:17.717 08:06:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:17.717 [2024-11-19 08:06:09.571335] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:17.717 [2024-11-19 08:06:09.573994] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:42:17.717 [2024-11-19 08:06:09.574102] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:17.975 [2024-11-19 08:06:09.716245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:17.975 [2024-11-19 08:06:09.846625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:17.975 [2024-11-19 08:06:09.846726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:17.975 [2024-11-19 08:06:09.846757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:17.975 [2024-11-19 08:06:09.846778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:17.975 [2024-11-19 08:06:09.846808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:17.976 [2024-11-19 08:06:09.849399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.976 [2024-11-19 08:06:09.849409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:18.542 [2024-11-19 08:06:10.224582] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:18.542 [2024-11-19 08:06:10.225318] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:18.542 [2024-11-19 08:06:10.225661] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:18.801 5000+0 records in 00:42:18.801 5000+0 records out 00:42:18.801 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0120277 s, 851 MB/s 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:18.801 AIO0 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:18.801 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.801 08:06:10 
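The `setup_bdev_aio` step above boils down to creating a zero-filled backing file with `dd` and registering it as an AIO bdev over RPC. The sketch below uses a scaled-down 100 KiB file (the log uses 2048 x 5000 = 10 MB) so it is cheap to run, and shows the `rpc.py` call as a comment since it needs a running `nvmf_tgt`.

```shell
# Sketch of setup_bdev_aio: zero-filled backing file + AIO bdev registration.
AIOFILE=$(mktemp)                        # stand-in for .../test/nvmf/target/aiofile
dd if=/dev/zero of="$AIOFILE" bs=2048 count=50 2>/dev/null

SIZE=$(wc -c < "$AIOFILE")               # 2048 * 50 = 102400 bytes
echo "created $AIOFILE ($SIZE bytes)"

# With a target up, the file becomes bdev AIO0 with a 2048-byte block size:
#   scripts/rpc.py bdev_aio_create "$AIOFILE" AIO0 2048

rm -f "$AIOFILE"
```

The block size passed to `bdev_aio_create` (2048) matches the `bs=` used by `dd`, so the file holds a whole number of blocks.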
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:18.802 [2024-11-19 08:06:10.657736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:18.802 [2024-11-19 08:06:10.686811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3193848 0 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3193848 0 idle 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3193848 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3193848 -w 256 00:42:18.802 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3193848 root 20 0 20.1t 196224 100992 S 0.0 0.3 0:00.75 reactor_0' 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3193848 root 20 0 20.1t 196224 100992 S 0.0 0.3 0:00.75 reactor_0 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:19.060 
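The busy/idle decision that `reactor_is_busy_or_idle` keeps making in this log can be isolated into a small sketch: take the reactor's thread line from `top -bHn 1`, pull the %CPU column (field 9), truncate it to an integer, and compare against a threshold. The sample line below is copied from this log rather than taken from a live `top` run.

```shell
# Sketch of the %CPU extraction used by reactor_is_busy_or_idle above.
cpu_rate_of() {
    # $1: one "top -bH" thread line for the reactor
    echo "$1" | sed -e 's/^\s*//g' | awk '{print $9}'
}

# Thread line as it appears in this run (reactor_0, idle):
line='3193848 root      20   0   20.1t 196224 100992 S   0.0   0.3   0:00.75 reactor_0'

rate=$(cpu_rate_of "$line")
rate=${rate%%.*}                 # "0.0" -> "0", same truncation as the script
idle_threshold=30

if [ "$rate" -gt "$idle_threshold" ]; then
    echo "reactor busy (${rate}%)"
else
    echo "reactor idle (${rate}%)"
fi
```

This also explains the retry loop (`j = 10; j--; sleep 1`) seen later: a single `top` snapshot can be noisy (the 6.7% reading right after perf starts), so the check re-samples until the rate settles on the expected side of the threshold.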
08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3193848 1 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3193848 1 idle 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3193848 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:19.060 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:19.061 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:19.061 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:19.061 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:19.061 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:19.061 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:19.061 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3193848 -w 256 00:42:19.061 08:06:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3193855 root 20 0 20.1t 196224 100992 S 0.0 0.3 0:00.00 reactor_1' 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3193855 root 20 0 20.1t 
196224 100992 S 0.0 0.3 0:00.00 reactor_1 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3194021 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3193848 0 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3193848 0 busy 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3193848 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3193848 -w 256 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3193848 root 20 0 20.1t 197376 101760 S 6.7 0.3 0:00.76 reactor_0' 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3193848 root 20 0 20.1t 197376 101760 S 6.7 0.3 0:00.76 reactor_0 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:19.319 08:06:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 3193848 -w 256 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3193848 root 20 0 20.1t 210048 102144 R 99.9 0.3 0:02.88 reactor_0' 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3193848 root 20 0 20.1t 210048 102144 R 99.9 0.3 0:02.88 reactor_0 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3193848 1 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3193848 1 busy 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3193848 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3193848 -w 256 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3193855 root 20 0 20.1t 210048 102144 R 93.3 0.3 0:01.19 reactor_1' 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3193855 root 20 0 20.1t 210048 102144 R 93.3 0.3 0:01.19 reactor_1 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:20.696 08:06:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3194021 00:42:30.677 Initializing NVMe Controllers 00:42:30.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:30.677 
Controller IO queue size 256, less than required. 00:42:30.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:30.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:30.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:30.677 Initialization complete. Launching workers. 00:42:30.677 ======================================================== 00:42:30.677 Latency(us) 00:42:30.677 Device Information : IOPS MiB/s Average min max 00:42:30.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 10103.80 39.47 25356.96 7306.76 38025.42 00:42:30.678 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 10767.90 42.06 23792.57 6315.95 29079.25 00:42:30.678 ======================================================== 00:42:30.678 Total : 20871.70 81.53 24549.88 6315.95 38025.42 00:42:30.678 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3193848 0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3193848 0 idle 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3193848 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:30.678 08:06:21 
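The workload that produced the busy readings and the latency table above was driven by `spdk_nvme_perf` (target/interrupt.sh@31). Assembling its arguments as an array, as SPDK test scripts do, keeps the quoting of the `-r` transport string safe; the path and connection details here are the ones from this run, and the command is shown dry since it needs the target up.

```shell
# Sketch of the spdk_nvme_perf invocation from target/interrupt.sh above.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
PERF_ARGS=(
    -q 256          # queue depth (the "IO queue size 256" warning above refers to this)
    -o 4096         # 4 KiB I/O size
    -w randrw -M 30 # random mixed workload, 30% reads
    -t 10           # run for 10 seconds
    -c 0xC          # initiator on cores 2 and 3 (matches "lcore 2"/"lcore 3" above)
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
)
echo "would run: $PERF ${PERF_ARGS[*]}"
```

Running it against the target's two-core reactor mask (0x3) is what drives both reactors busy for the duration of the test, after which the idle checks below confirm they drop back to 0.0%.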
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3193848 -w 256 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3193848 root 20 0 20.1t 210048 102144 S 0.0 0.3 0:20.20 reactor_0' 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3193848 root 20 0 20.1t 210048 102144 S 0.0 0.3 0:20.20 reactor_0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3193848 1 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3193848 1 idle 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3193848 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3193848 -w 256 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3193855 root 20 0 20.1t 210048 102144 S 0.0 0.3 0:09.48 reactor_1' 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3193855 root 20 0 20.1t 210048 102144 S 0.0 0.3 0:09.48 reactor_1 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:42:30.678 08:06:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:30.678 08:06:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:30.678 08:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:42:30.678 08:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:30.678 08:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:30.678 08:06:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3193848 0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3193848 0 idle 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3193848 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:32.580 08:06:24 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3193848 -w 256 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3193848 root 20 0 20.1t 237312 111360 S 0.0 0.4 0:20.39 reactor_0' 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3193848 root 20 0 20.1t 237312 111360 S 0.0 0.4 0:20.39 reactor_0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # 
return 0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3193848 1 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3193848 1 idle 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3193848 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3193848 -w 256 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3193855 root 20 0 20.1t 237312 111360 S 0.0 0.4 0:09.54 reactor_1' 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3193855 root 20 0 20.1t 237312 111360 S 0.0 0.4 0:09.54 reactor_1 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:32.580 08:06:24 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:32.580 08:06:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:32.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:32.839 08:06:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:32.839 rmmod nvme_tcp 00:42:32.839 rmmod nvme_fabrics 00:42:32.839 rmmod nvme_keyring 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3193848 ']' 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3193848 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3193848 ']' 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3193848 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:32.839 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193848 00:42:33.097 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:33.097 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:33.097 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193848' 00:42:33.097 killing process with pid 3193848 00:42:33.097 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3193848 00:42:33.097 08:06:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3193848 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:34.032 08:06:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:34.032 08:06:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:36.565 08:06:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:36.565 00:42:36.565 real 0m20.772s 00:42:36.565 user 0m37.516s 00:42:36.565 sys 0m7.326s 00:42:36.565 08:06:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:36.565 08:06:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:36.565 ************************************ 00:42:36.565 END TEST nvmf_interrupt 00:42:36.565 ************************************ 00:42:36.565 00:42:36.565 real 35m36.862s 00:42:36.565 user 93m22.424s 00:42:36.565 sys 7m56.481s 00:42:36.565 08:06:28 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:36.565 08:06:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:36.565 ************************************ 00:42:36.565 END TEST nvmf_tcp 00:42:36.565 ************************************ 00:42:36.565 08:06:28 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:42:36.565 08:06:28 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:36.565 08:06:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:36.565 08:06:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:36.565 08:06:28 -- common/autotest_common.sh@10 -- # set +x 00:42:36.565 ************************************ 00:42:36.565 START TEST spdkcli_nvmf_tcp 00:42:36.565 ************************************ 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:36.565 * Looking for test storage... 00:42:36.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:36.565 
08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:36.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.565 --rc genhtml_branch_coverage=1 00:42:36.565 --rc genhtml_function_coverage=1 00:42:36.565 
--rc genhtml_legend=1 00:42:36.565 --rc geninfo_all_blocks=1 00:42:36.565 --rc geninfo_unexecuted_blocks=1 00:42:36.565 00:42:36.565 ' 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:36.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.565 --rc genhtml_branch_coverage=1 00:42:36.565 --rc genhtml_function_coverage=1 00:42:36.565 --rc genhtml_legend=1 00:42:36.565 --rc geninfo_all_blocks=1 00:42:36.565 --rc geninfo_unexecuted_blocks=1 00:42:36.565 00:42:36.565 ' 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:36.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.565 --rc genhtml_branch_coverage=1 00:42:36.565 --rc genhtml_function_coverage=1 00:42:36.565 --rc genhtml_legend=1 00:42:36.565 --rc geninfo_all_blocks=1 00:42:36.565 --rc geninfo_unexecuted_blocks=1 00:42:36.565 00:42:36.565 ' 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:36.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.565 --rc genhtml_branch_coverage=1 00:42:36.565 --rc genhtml_function_coverage=1 00:42:36.565 --rc genhtml_legend=1 00:42:36.565 --rc geninfo_all_blocks=1 00:42:36.565 --rc geninfo_unexecuted_blocks=1 00:42:36.565 00:42:36.565 ' 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:36.565 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:36.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3196158 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3196158 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3196158 ']' 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:36.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:36.566 08:06:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:36.566 [2024-11-19 08:06:28.315199] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:42:36.566 [2024-11-19 08:06:28.315350] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196158 ] 00:42:36.566 [2024-11-19 08:06:28.447387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:36.824 [2024-11-19 08:06:28.583117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:36.824 [2024-11-19 08:06:28.583120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:37.392 08:06:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:37.393 08:06:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:37.393 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:37.393 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:37.393 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:37.393 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:42:37.394 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:42:37.394 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:37.394 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:37.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:37.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:37.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:37.394 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:37.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:37.395 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:37.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:37.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:37.396 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:37.396 ' 00:42:40.743 [2024-11-19 08:06:32.071615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:41.677 [2024-11-19 08:06:33.345591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:44.205 [2024-11-19 08:06:35.693352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:46.106 [2024-11-19 08:06:37.711858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:47.477 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:47.477 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:47.477 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:47.477 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:47.477 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:47.477 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:47.477 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:47.478 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:47.478 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:47.478 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:47.478 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:47.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:47.478 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:47.478 08:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:42:47.478 08:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:47.478 08:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.478 08:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:47.478 08:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:47.478 08:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.478 08:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:47.478 08:06:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:48.043 08:06:39 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:48.043 08:06:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:48.043 08:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:48.043 08:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:48.043 08:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:48.043 08:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:48.043 08:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:48.043 08:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:48.043 08:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:48.043 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:48.043 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:48.043 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:48.043 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:48.043 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:48.043 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:48.043 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:48.043 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:42:48.043 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:48.043 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:48.043 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:48.043 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:48.043 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:48.043 ' 00:42:54.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:54.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:54.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:54.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:54.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:54.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:54.600 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:54.600 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:54.600 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:54.600 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:54.600 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:54.600 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:54.600 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:54.600 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3196158 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3196158 ']' 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3196158 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196158 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196158' 00:42:54.600 killing process with pid 3196158 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3196158 00:42:54.600 08:06:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3196158 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3196158 ']' 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3196158 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3196158 ']' 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3196158 00:42:55.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3196158) - No such process 00:42:55.167 08:06:47 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3196158 is not found' 00:42:55.167 Process with pid 3196158 is not found 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:55.167 00:42:55.167 real 0m18.988s 00:42:55.167 user 0m39.764s 00:42:55.167 sys 0m1.034s 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:55.167 08:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:55.167 ************************************ 00:42:55.167 END TEST spdkcli_nvmf_tcp 00:42:55.167 ************************************ 00:42:55.167 08:06:47 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:55.167 08:06:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:55.167 08:06:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:55.167 08:06:47 -- common/autotest_common.sh@10 -- # set +x 00:42:55.426 ************************************ 00:42:55.426 START TEST nvmf_identify_passthru 00:42:55.426 ************************************ 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:55.426 * Looking for test storage... 
00:42:55.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:55.426 08:06:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:55.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:55.426 --rc genhtml_branch_coverage=1 00:42:55.426 --rc genhtml_function_coverage=1 00:42:55.426 --rc genhtml_legend=1 00:42:55.426 --rc geninfo_all_blocks=1 00:42:55.426 --rc geninfo_unexecuted_blocks=1 00:42:55.426 00:42:55.426 ' 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:55.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:55.426 --rc genhtml_branch_coverage=1 00:42:55.426 --rc genhtml_function_coverage=1 
00:42:55.426 --rc genhtml_legend=1 00:42:55.426 --rc geninfo_all_blocks=1 00:42:55.426 --rc geninfo_unexecuted_blocks=1 00:42:55.426 00:42:55.426 ' 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:55.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:55.426 --rc genhtml_branch_coverage=1 00:42:55.426 --rc genhtml_function_coverage=1 00:42:55.426 --rc genhtml_legend=1 00:42:55.426 --rc geninfo_all_blocks=1 00:42:55.426 --rc geninfo_unexecuted_blocks=1 00:42:55.426 00:42:55.426 ' 00:42:55.426 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:55.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:55.426 --rc genhtml_branch_coverage=1 00:42:55.427 --rc genhtml_function_coverage=1 00:42:55.427 --rc genhtml_legend=1 00:42:55.427 --rc geninfo_all_blocks=1 00:42:55.427 --rc geninfo_unexecuted_blocks=1 00:42:55.427 00:42:55.427 ' 00:42:55.427 08:06:47 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:55.427 08:06:47 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:55.427 08:06:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:55.427 08:06:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:55.427 08:06:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:55.427 08:06:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:55.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:55.427 08:06:47 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:55.427 08:06:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:55.427 08:06:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:55.427 08:06:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:55.427 08:06:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:55.427 08:06:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:55.427 08:06:47 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:55.427 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:55.427 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:55.427 08:06:47 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:55.427 08:06:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:57.960 08:06:49 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:57.960 
08:06:49 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:57.960 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:57.960 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:57.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:57.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:57.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:57.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:57.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:42:57.961 00:42:57.961 --- 10.0.0.2 ping statistics --- 00:42:57.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:57.961 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:57.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:57.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:42:57.961 00:42:57.961 --- 10.0.0.1 ping statistics --- 00:42:57.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:57.961 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:57.961 08:06:49 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:57.961 08:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:57.961 08:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:42:57.961 08:06:49 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:42:57.961 08:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:42:57.961 08:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:42:57.961 08:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:42:57.961 08:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:42:57.961 08:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:57.961 08:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:02.146 08:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:43:02.146 08:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:43:02.146 08:06:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:02.146 08:06:53 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:07.422 08:06:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:43:07.422 08:06:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.422 08:06:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.422 08:06:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3201166 00:43:07.422 08:06:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:07.422 08:06:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:07.422 08:06:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3201166 00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3201166 ']' 00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:07.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:07.422 08:06:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.422 [2024-11-19 08:06:58.530931] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:43:07.422 [2024-11-19 08:06:58.531108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:07.423 [2024-11-19 08:06:58.686969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:07.423 [2024-11-19 08:06:58.834312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:07.423 [2024-11-19 08:06:58.834404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:07.423 [2024-11-19 08:06:58.834430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:07.423 [2024-11-19 08:06:58.834455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:07.423 [2024-11-19 08:06:58.834475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:07.423 [2024-11-19 08:06:58.837364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:07.423 [2024-11-19 08:06:58.837423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:07.423 [2024-11-19 08:06:58.837483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:07.423 [2024-11-19 08:06:58.837490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:07.679 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:07.679 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:43:07.679 08:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:07.679 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.679 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.679 INFO: Log level set to 20 00:43:07.679 INFO: Requests: 00:43:07.679 { 00:43:07.679 "jsonrpc": "2.0", 00:43:07.679 "method": "nvmf_set_config", 00:43:07.679 "id": 1, 00:43:07.679 "params": { 00:43:07.679 "admin_cmd_passthru": { 00:43:07.679 "identify_ctrlr": true 00:43:07.679 } 00:43:07.679 } 00:43:07.679 } 00:43:07.679 00:43:07.679 INFO: response: 00:43:07.679 { 00:43:07.679 "jsonrpc": "2.0", 00:43:07.679 "id": 1, 00:43:07.679 "result": true 00:43:07.679 } 00:43:07.679 00:43:07.679 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.679 08:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:07.679 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.679 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.679 INFO: Setting log level to 20 00:43:07.679 INFO: Setting log level to 20 00:43:07.679 INFO: Log level set to 20 00:43:07.679 INFO: Log level set to 20 00:43:07.679 
INFO: Requests: 00:43:07.679 { 00:43:07.679 "jsonrpc": "2.0", 00:43:07.679 "method": "framework_start_init", 00:43:07.679 "id": 1 00:43:07.679 } 00:43:07.679 00:43:07.679 INFO: Requests: 00:43:07.679 { 00:43:07.679 "jsonrpc": "2.0", 00:43:07.679 "method": "framework_start_init", 00:43:07.679 "id": 1 00:43:07.679 } 00:43:07.679 00:43:07.937 [2024-11-19 08:06:59.850086] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:07.937 INFO: response: 00:43:07.937 { 00:43:07.937 "jsonrpc": "2.0", 00:43:07.937 "id": 1, 00:43:07.937 "result": true 00:43:07.937 } 00:43:07.937 00:43:07.937 INFO: response: 00:43:07.937 { 00:43:07.937 "jsonrpc": "2.0", 00:43:07.937 "id": 1, 00:43:07.937 "result": true 00:43:07.937 } 00:43:07.937 00:43:07.937 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.937 08:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:07.937 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.937 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.937 INFO: Setting log level to 40 00:43:07.937 INFO: Setting log level to 40 00:43:07.937 INFO: Setting log level to 40 00:43:07.937 [2024-11-19 08:06:59.863058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:08.194 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.194 08:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:08.194 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:08.194 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:08.194 08:06:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:43:08.194 08:06:59 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.194 08:06:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:11.474 Nvme0n1 00:43:11.474 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.474 08:07:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:11.474 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.474 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:11.474 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.474 08:07:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:11.474 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.474 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:11.475 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.475 08:07:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:11.475 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.475 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:11.475 [2024-11-19 08:07:02.821748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:11.475 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.475 08:07:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:11.475 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.475 08:07:02 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:11.475 [ 00:43:11.475 { 00:43:11.475 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:11.475 "subtype": "Discovery", 00:43:11.475 "listen_addresses": [], 00:43:11.475 "allow_any_host": true, 00:43:11.475 "hosts": [] 00:43:11.475 }, 00:43:11.475 { 00:43:11.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:11.475 "subtype": "NVMe", 00:43:11.475 "listen_addresses": [ 00:43:11.475 { 00:43:11.475 "trtype": "TCP", 00:43:11.475 "adrfam": "IPv4", 00:43:11.475 "traddr": "10.0.0.2", 00:43:11.475 "trsvcid": "4420" 00:43:11.475 } 00:43:11.475 ], 00:43:11.475 "allow_any_host": true, 00:43:11.475 "hosts": [], 00:43:11.475 "serial_number": "SPDK00000000000001", 00:43:11.475 "model_number": "SPDK bdev Controller", 00:43:11.475 "max_namespaces": 1, 00:43:11.475 "min_cntlid": 1, 00:43:11.475 "max_cntlid": 65519, 00:43:11.475 "namespaces": [ 00:43:11.475 { 00:43:11.475 "nsid": 1, 00:43:11.475 "bdev_name": "Nvme0n1", 00:43:11.475 "name": "Nvme0n1", 00:43:11.475 "nguid": "35E6277BFF1A49B1B23B19C6DF542569", 00:43:11.475 "uuid": "35e6277b-ff1a-49b1-b23b-19c6df542569" 00:43:11.475 } 00:43:11.475 ] 00:43:11.475 } 00:43:11.475 ] 00:43:11.475 08:07:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.475 08:07:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:11.475 08:07:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:11.475 08:07:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:11.475 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:43:11.475 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:11.475 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:11.475 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:11.734 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:43:11.734 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:43:11.734 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:43:11.734 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:11.734 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:11.734 08:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:11.734 rmmod nvme_tcp 00:43:11.734 rmmod nvme_fabrics 00:43:11.734 rmmod nvme_keyring 00:43:11.734 08:07:03 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3201166 ']' 00:43:11.734 08:07:03 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3201166 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3201166 ']' 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3201166 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3201166 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3201166' 00:43:11.734 killing process with pid 3201166 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3201166 00:43:11.734 08:07:03 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3201166 00:43:14.266 08:07:06 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:14.266 08:07:06 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:14.266 08:07:06 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:14.266 08:07:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:14.266 08:07:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:43:14.266 08:07:06 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:14.266 08:07:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:43:14.266 08:07:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:14.266 08:07:06 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:14.266 08:07:06 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:14.266 08:07:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:14.266 08:07:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:16.800 08:07:08 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:16.800 00:43:16.800 real 0m21.060s 00:43:16.800 user 0m34.302s 00:43:16.800 sys 0m3.614s 00:43:16.800 08:07:08 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:16.800 08:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:16.800 ************************************ 00:43:16.800 END TEST nvmf_identify_passthru 00:43:16.800 ************************************ 00:43:16.800 08:07:08 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:16.800 08:07:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:16.800 08:07:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:16.800 08:07:08 -- common/autotest_common.sh@10 -- # set +x 00:43:16.800 ************************************ 00:43:16.800 START TEST nvmf_dif 00:43:16.800 ************************************ 00:43:16.800 08:07:08 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:16.800 * Looking for test storage... 
00:43:16.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:16.800 08:07:08 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:16.800 08:07:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:43:16.800 08:07:08 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:16.800 08:07:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:16.800 08:07:08 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:16.801 08:07:08 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:16.801 08:07:08 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:16.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.801 --rc genhtml_branch_coverage=1 00:43:16.801 --rc genhtml_function_coverage=1 00:43:16.801 --rc genhtml_legend=1 00:43:16.801 --rc geninfo_all_blocks=1 00:43:16.801 --rc geninfo_unexecuted_blocks=1 00:43:16.801 00:43:16.801 ' 00:43:16.801 08:07:08 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:16.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.801 --rc genhtml_branch_coverage=1 00:43:16.801 --rc genhtml_function_coverage=1 00:43:16.801 --rc genhtml_legend=1 00:43:16.801 --rc geninfo_all_blocks=1 00:43:16.801 --rc geninfo_unexecuted_blocks=1 00:43:16.801 00:43:16.801 ' 00:43:16.801 08:07:08 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:43:16.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.801 --rc genhtml_branch_coverage=1 00:43:16.801 --rc genhtml_function_coverage=1 00:43:16.801 --rc genhtml_legend=1 00:43:16.801 --rc geninfo_all_blocks=1 00:43:16.801 --rc geninfo_unexecuted_blocks=1 00:43:16.801 00:43:16.801 ' 00:43:16.801 08:07:08 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:16.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.801 --rc genhtml_branch_coverage=1 00:43:16.801 --rc genhtml_function_coverage=1 00:43:16.801 --rc genhtml_legend=1 00:43:16.801 --rc geninfo_all_blocks=1 00:43:16.801 --rc geninfo_unexecuted_blocks=1 00:43:16.801 00:43:16.801 ' 00:43:16.801 08:07:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:16.801 08:07:08 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:16.801 08:07:08 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:16.801 08:07:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.801 08:07:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.801 08:07:08 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.801 08:07:08 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:16.801 08:07:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:16.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:16.801 08:07:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:16.801 08:07:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:43:16.801 08:07:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:16.801 08:07:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:16.801 08:07:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:16.801 08:07:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:16.801 08:07:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:16.801 08:07:08 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:43:16.801 08:07:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:18.705 08:07:10 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:18.705 08:07:10 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:18.706 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:18.706 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:18.706 08:07:10 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:18.706 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:18.706 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:18.706 
08:07:10 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:18.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:18.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:43:18.706 00:43:18.706 --- 10.0.0.2 ping statistics --- 00:43:18.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:18.706 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:18.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:18.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:43:18.706 00:43:18.706 --- 10.0.0.1 ping statistics --- 00:43:18.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:18.706 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:18.706 08:07:10 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:20.130 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:20.130 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:43:20.130 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:20.130 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:20.130 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:43:20.130 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:20.130 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:20.130 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:20.130 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:20.130 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:43:20.130 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:43:20.130 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:43:20.130 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:43:20.130 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:43:20.130 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:43:20.130 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:43:20.130 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:43:20.130 08:07:11 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:20.130 08:07:11 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:20.131 08:07:11 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:20.131 08:07:11 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:20.131 08:07:11 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:20.131 08:07:11 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:20.131 08:07:11 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:20.131 08:07:11 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:20.131 08:07:11 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:20.131 08:07:11 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:20.131 08:07:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:20.131 08:07:11 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3204599 00:43:20.131 08:07:11 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:20.131 08:07:11 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3204599 00:43:20.131 08:07:11 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3204599 ']' 00:43:20.131 08:07:11 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:20.131 08:07:11 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:20.131 08:07:11 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:20.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:20.131 08:07:11 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:20.131 08:07:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:20.131 [2024-11-19 08:07:11.936531] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:43:20.131 [2024-11-19 08:07:11.936659] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:20.391 [2024-11-19 08:07:12.088142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:20.391 [2024-11-19 08:07:12.229396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:20.391 [2024-11-19 08:07:12.229490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:20.391 [2024-11-19 08:07:12.229515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:20.391 [2024-11-19 08:07:12.229540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:20.391 [2024-11-19 08:07:12.229559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:20.391 [2024-11-19 08:07:12.231271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:43:21.327 08:07:12 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:21.327 08:07:12 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:21.327 08:07:12 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:21.327 08:07:12 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:21.327 [2024-11-19 08:07:12.966289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.327 08:07:12 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:21.327 08:07:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:21.327 ************************************ 00:43:21.327 START TEST fio_dif_1_default 00:43:21.327 ************************************ 00:43:21.327 08:07:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:43:21.327 08:07:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:21.327 08:07:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:21.328 08:07:12 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:43:21.328 08:07:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:43:21.328 08:07:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:21.328 08:07:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:21.328 08:07:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.328 08:07:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:21.328 bdev_null0 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:21.328 [2024-11-19 08:07:13.026609] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:21.328 { 00:43:21.328 "params": { 00:43:21.328 "name": "Nvme$subsystem", 00:43:21.328 "trtype": "$TEST_TRANSPORT", 00:43:21.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:21.328 "adrfam": "ipv4", 00:43:21.328 "trsvcid": "$NVMF_PORT", 00:43:21.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:21.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:21.328 "hdgst": ${hdgst:-false}, 00:43:21.328 "ddgst": ${ddgst:-false} 00:43:21.328 }, 00:43:21.328 "method": "bdev_nvme_attach_controller" 00:43:21.328 } 00:43:21.328 EOF 00:43:21.328 )") 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:21.328 08:07:13 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
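The `gen_nvmf_target_json` helper traced above expands a per-subsystem template into the attach-controller JSON that fio's spdk_bdev engine reads from `/dev/fd/62`. As a sketch, here is the expanded fragment for subsystem 0 (values copied from this run's trace) with a plain-shell sanity check on the fields; `grep` stands in for the `jq .` step so the check has no extra dependency.

```shell
# Attach-controller JSON as expanded for subsystem 0 in this run.
cfg='{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'

# Sanity-check the fields the fio spdk_bdev engine will consume.
echo "$cfg" | grep -q '"method": "bdev_nvme_attach_controller"' || exit 1
echo "$cfg" | grep -q '"traddr": "10.0.0.2"' || exit 1
echo "$cfg"
```

In the multi-subsystem tests later in the log, one such object is emitted per subsystem (Nvme0, Nvme1, ...) and the fragments are joined with `IFS=,` before being handed to fio, matching the `printf '%s\n'` line in the trace.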
00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:21.328 "params": { 00:43:21.328 "name": "Nvme0", 00:43:21.328 "trtype": "tcp", 00:43:21.328 "traddr": "10.0.0.2", 00:43:21.328 "adrfam": "ipv4", 00:43:21.328 "trsvcid": "4420", 00:43:21.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:21.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:21.328 "hdgst": false, 00:43:21.328 "ddgst": false 00:43:21.328 }, 00:43:21.328 "method": "bdev_nvme_attach_controller" 00:43:21.328 }' 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:21.328 08:07:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:21.587 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:21.587 fio-3.35 00:43:21.587 Starting 1 thread 00:43:33.814 00:43:33.814 filename0: (groupid=0, jobs=1): err= 0: pid=3205006: Tue Nov 19 08:07:24 2024 00:43:33.814 read: IOPS=195, BW=783KiB/s (802kB/s)(7840KiB/10016msec) 00:43:33.814 slat (nsec): min=5735, max=41133, avg=14425.27, stdev=4599.10 00:43:33.814 clat (usec): min=699, max=43668, avg=20395.69, stdev=20190.02 00:43:33.814 lat (usec): min=710, max=43709, avg=20410.12, stdev=20189.87 00:43:33.814 clat percentiles (usec): 00:43:33.814 | 1.00th=[ 717], 5.00th=[ 742], 10.00th=[ 750], 20.00th=[ 766], 
00:43:33.814 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 857], 60.00th=[41157], 00:43:33.814 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:33.814 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:43:33.814 | 99.99th=[43779] 00:43:33.814 bw ( KiB/s): min= 672, max= 896, per=99.90%, avg=782.40, stdev=58.25, samples=20 00:43:33.814 iops : min= 168, max= 224, avg=195.60, stdev=14.56, samples=20 00:43:33.814 lat (usec) : 750=10.10%, 1000=41.12% 00:43:33.814 lat (msec) : 2=0.20%, 50=48.57% 00:43:33.814 cpu : usr=91.94%, sys=7.50%, ctx=14, majf=0, minf=1634 00:43:33.814 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:33.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:33.814 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:33.814 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:33.814 00:43:33.814 Run status group 0 (all jobs): 00:43:33.814 READ: bw=783KiB/s (802kB/s), 783KiB/s-783KiB/s (802kB/s-802kB/s), io=7840KiB (8028kB), run=10016-10016msec 00:43:33.814 ----------------------------------------------------- 00:43:33.814 Suppressions used: 00:43:33.814 count bytes template 00:43:33.814 1 8 /usr/src/fio/parse.c 00:43:33.814 1 8 libtcmalloc_minimal.so 00:43:33.814 1 904 libcrypto.so 00:43:33.814 ----------------------------------------------------- 00:43:33.814 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:43:33.814 08:07:25 
nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.814 00:43:33.814 real 0m12.365s 00:43:33.814 user 0m11.257s 00:43:33.814 sys 0m1.238s 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:33.814 ************************************ 00:43:33.814 END TEST fio_dif_1_default 00:43:33.814 ************************************ 00:43:33.814 08:07:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:43:33.814 08:07:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:33.814 08:07:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:33.814 08:07:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:33.814 ************************************ 00:43:33.814 START TEST fio_dif_1_multi_subsystems 00:43:33.814 ************************************ 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:43:33.814 08:07:25 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:33.814 bdev_null0 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.814 08:07:25 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:33.814 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:33.815 [2024-11-19 08:07:25.432398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:33.815 bdev_null1 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:33.815 { 00:43:33.815 "params": { 00:43:33.815 "name": "Nvme$subsystem", 00:43:33.815 "trtype": "$TEST_TRANSPORT", 00:43:33.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:33.815 "adrfam": "ipv4", 00:43:33.815 "trsvcid": 
"$NVMF_PORT", 00:43:33.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:33.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:33.815 "hdgst": ${hdgst:-false}, 00:43:33.815 "ddgst": ${ddgst:-false} 00:43:33.815 }, 00:43:33.815 "method": "bdev_nvme_attach_controller" 00:43:33.815 } 00:43:33.815 EOF 00:43:33.815 )") 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:33.815 { 00:43:33.815 "params": { 00:43:33.815 "name": "Nvme$subsystem", 00:43:33.815 "trtype": "$TEST_TRANSPORT", 00:43:33.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:33.815 "adrfam": "ipv4", 00:43:33.815 "trsvcid": "$NVMF_PORT", 00:43:33.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:33.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:33.815 "hdgst": ${hdgst:-false}, 00:43:33.815 "ddgst": ${ddgst:-false} 00:43:33.815 }, 00:43:33.815 "method": "bdev_nvme_attach_controller" 00:43:33.815 } 00:43:33.815 EOF 00:43:33.815 )") 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:33.815 "params": { 00:43:33.815 "name": "Nvme0", 00:43:33.815 "trtype": "tcp", 00:43:33.815 "traddr": "10.0.0.2", 00:43:33.815 "adrfam": "ipv4", 00:43:33.815 "trsvcid": "4420", 00:43:33.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:33.815 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:33.815 "hdgst": false, 00:43:33.815 "ddgst": false 00:43:33.815 }, 00:43:33.815 "method": "bdev_nvme_attach_controller" 00:43:33.815 },{ 00:43:33.815 "params": { 00:43:33.815 "name": "Nvme1", 00:43:33.815 "trtype": "tcp", 00:43:33.815 "traddr": "10.0.0.2", 00:43:33.815 "adrfam": "ipv4", 00:43:33.815 "trsvcid": "4420", 00:43:33.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:33.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:33.815 "hdgst": false, 00:43:33.815 "ddgst": false 00:43:33.815 }, 00:43:33.815 "method": "bdev_nvme_attach_controller" 00:43:33.815 }' 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:33.815 08:07:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:34.074 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:34.074 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:34.074 fio-3.35 00:43:34.074 Starting 2 threads 00:43:46.277 00:43:46.277 filename0: (groupid=0, jobs=1): err= 0: pid=3206566: Tue Nov 19 08:07:36 2024 00:43:46.277 read: IOPS=141, BW=565KiB/s (579kB/s)(5664KiB/10023msec) 00:43:46.277 slat (nsec): min=4995, max=70880, avg=19679.23, stdev=9205.45 00:43:46.277 clat (usec): min=763, max=42982, avg=28253.27, stdev=18966.62 00:43:46.277 lat (usec): min=776, max=43001, avg=28272.95, stdev=18965.98 00:43:46.277 clat percentiles (usec): 00:43:46.277 | 1.00th=[ 807], 5.00th=[ 857], 10.00th=[ 889], 20.00th=[ 938], 00:43:46.277 | 30.00th=[ 996], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:46.277 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:43:46.277 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:43:46.277 | 99.99th=[42730] 00:43:46.277 bw ( KiB/s): min= 384, max= 768, per=59.28%, avg=564.80, stdev=180.80, samples=20 00:43:46.277 iops : min= 96, max= 192, avg=141.20, stdev=45.20, samples=20 00:43:46.277 lat (usec) : 1000=30.16% 00:43:46.277 lat (msec) : 2=2.33%, 50=67.51% 00:43:46.277 cpu : usr=96.54%, sys=2.76%, ctx=35, majf=0, minf=1636 00:43:46.277 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:46.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.277 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:46.277 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:46.277 filename1: (groupid=0, jobs=1): err= 0: pid=3206567: Tue Nov 19 08:07:36 2024 00:43:46.277 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10015msec) 00:43:46.277 slat (nsec): min=9159, max=75508, avg=17973.55, stdev=9247.61 00:43:46.277 clat (usec): min=745, max=46130, avg=41326.23, stdev=3743.33 00:43:46.277 lat (usec): min=756, max=46192, avg=41344.21, 
stdev=3742.71 00:43:46.277 clat percentiles (usec): 00:43:46.277 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:46.277 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:43:46.277 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:46.277 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:43:46.277 | 99.99th=[45876] 00:43:46.277 bw ( KiB/s): min= 352, max= 416, per=40.47%, avg=385.60, stdev=19.35, samples=20 00:43:46.277 iops : min= 88, max= 104, avg=96.40, stdev= 4.84, samples=20 00:43:46.277 lat (usec) : 750=0.10%, 1000=0.72% 00:43:46.277 lat (msec) : 50=99.17% 00:43:46.277 cpu : usr=96.03%, sys=3.50%, ctx=14, majf=0, minf=1636 00:43:46.277 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:46.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.277 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:46.277 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:46.277 00:43:46.277 Run status group 0 (all jobs): 00:43:46.277 READ: bw=951KiB/s (974kB/s), 387KiB/s-565KiB/s (396kB/s-579kB/s), io=9536KiB (9765kB), run=10015-10023msec 00:43:46.277 ----------------------------------------------------- 00:43:46.277 Suppressions used: 00:43:46.277 count bytes template 00:43:46.277 2 16 /usr/src/fio/parse.c 00:43:46.277 1 8 libtcmalloc_minimal.so 00:43:46.277 1 904 libcrypto.so 00:43:46.277 ----------------------------------------------------- 00:43:46.277 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.277 00:43:46.277 real 0m12.590s 00:43:46.277 user 0m21.638s 00:43:46.277 sys 0m1.122s 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:46.277 08:07:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:46.277 ************************************ 00:43:46.278 END TEST fio_dif_1_multi_subsystems 00:43:46.278 ************************************ 00:43:46.278 08:07:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:46.278 08:07:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:46.278 08:07:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:46.278 08:07:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:46.278 ************************************ 00:43:46.278 START TEST fio_dif_rand_params 00:43:46.278 ************************************ 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:46.278 08:07:38 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:46.278 bdev_null0 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:46.278 [2024-11-19 08:07:38.079069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:46.278 { 00:43:46.278 "params": { 00:43:46.278 "name": "Nvme$subsystem", 00:43:46.278 "trtype": "$TEST_TRANSPORT", 00:43:46.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:46.278 "adrfam": "ipv4", 00:43:46.278 "trsvcid": "$NVMF_PORT", 00:43:46.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:46.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:46.278 "hdgst": ${hdgst:-false}, 00:43:46.278 "ddgst": ${ddgst:-false} 00:43:46.278 }, 00:43:46.278 "method": "bdev_nvme_attach_controller" 00:43:46.278 } 00:43:46.278 EOF 00:43:46.278 )") 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:46.278 "params": { 00:43:46.278 "name": "Nvme0", 00:43:46.278 "trtype": "tcp", 00:43:46.278 "traddr": "10.0.0.2", 00:43:46.278 "adrfam": "ipv4", 00:43:46.278 "trsvcid": "4420", 00:43:46.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:46.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:46.278 "hdgst": false, 00:43:46.278 "ddgst": false 00:43:46.278 }, 00:43:46.278 "method": "bdev_nvme_attach_controller" 00:43:46.278 }' 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:46.278 08:07:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:46.536 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:46.536 ... 
00:43:46.536 fio-3.35 00:43:46.536 Starting 3 threads 00:43:53.141 00:43:53.141 filename0: (groupid=0, jobs=1): err= 0: pid=3208010: Tue Nov 19 08:07:44 2024 00:43:53.141 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(123MiB/5008msec) 00:43:53.141 slat (nsec): min=10967, max=52627, avg=20303.50, stdev=5155.60 00:43:53.141 clat (usec): min=7537, max=21554, avg=15210.42, stdev=2358.44 00:43:53.141 lat (usec): min=7555, max=21573, avg=15230.72, stdev=2358.61 00:43:53.141 clat percentiles (usec): 00:43:53.141 | 1.00th=[ 8586], 5.00th=[10552], 10.00th=[12256], 20.00th=[13435], 00:43:53.141 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15401], 60.00th=[15926], 00:43:53.141 | 70.00th=[16450], 80.00th=[17171], 90.00th=[17957], 95.00th=[18744], 00:43:53.141 | 99.00th=[20055], 99.50th=[20317], 99.90th=[21627], 99.95th=[21627], 00:43:53.141 | 99.99th=[21627] 00:43:53.141 bw ( KiB/s): min=22784, max=31744, per=35.44%, avg=25164.80, stdev=2560.14, samples=10 00:43:53.141 iops : min= 178, max= 248, avg=196.60, stdev=20.00, samples=10 00:43:53.141 lat (msec) : 10=3.35%, 20=95.44%, 50=1.22% 00:43:53.141 cpu : usr=93.13%, sys=6.29%, ctx=5, majf=0, minf=1634 00:43:53.141 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.141 issued rwts: total=986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:53.141 filename0: (groupid=0, jobs=1): err= 0: pid=3208011: Tue Nov 19 08:07:44 2024 00:43:53.141 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(122MiB/5007msec) 00:43:53.141 slat (nsec): min=7392, max=57084, avg=22524.71, stdev=6637.08 00:43:53.141 clat (usec): min=8826, max=23956, avg=15389.48, stdev=2340.93 00:43:53.141 lat (usec): min=8844, max=23978, avg=15412.00, stdev=2341.84 00:43:53.141 clat percentiles (usec): 00:43:53.141 | 
1.00th=[10159], 5.00th=[11994], 10.00th=[12649], 20.00th=[13435], 00:43:53.141 | 30.00th=[13960], 40.00th=[14615], 50.00th=[15139], 60.00th=[15926], 00:43:53.141 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18744], 95.00th=[19530], 00:43:53.141 | 99.00th=[20579], 99.50th=[21627], 99.90th=[23987], 99.95th=[23987], 00:43:53.141 | 99.99th=[23987] 00:43:53.141 bw ( KiB/s): min=23552, max=27904, per=35.05%, avg=24883.20, stdev=1581.78, samples=10 00:43:53.141 iops : min= 184, max= 218, avg=194.40, stdev=12.36, samples=10 00:43:53.141 lat (msec) : 10=0.82%, 20=96.51%, 50=2.67% 00:43:53.141 cpu : usr=93.35%, sys=6.05%, ctx=9, majf=0, minf=1634 00:43:53.141 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.141 issued rwts: total=974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:53.141 filename0: (groupid=0, jobs=1): err= 0: pid=3208012: Tue Nov 19 08:07:44 2024 00:43:53.141 read: IOPS=166, BW=20.8MiB/s (21.8MB/s)(105MiB/5046msec) 00:43:53.141 slat (nsec): min=7282, max=48985, avg=19824.50, stdev=4881.87 00:43:53.141 clat (usec): min=9476, max=55743, avg=17969.71, stdev=6009.42 00:43:53.141 lat (usec): min=9494, max=55761, avg=17989.54, stdev=6008.95 00:43:53.141 clat percentiles (usec): 00:43:53.141 | 1.00th=[12125], 5.00th=[13304], 10.00th=[14091], 20.00th=[15139], 00:43:53.141 | 30.00th=[16057], 40.00th=[16712], 50.00th=[17171], 60.00th=[17957], 00:43:53.141 | 70.00th=[18482], 80.00th=[19268], 90.00th=[20055], 95.00th=[21103], 00:43:53.141 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:43:53.141 | 99.99th=[55837] 00:43:53.141 bw ( KiB/s): min=12825, max=24576, per=30.15%, avg=21404.10, stdev=3304.78, samples=10 00:43:53.141 iops : min= 100, max= 192, avg=167.20, stdev=25.87, 
samples=10 00:43:53.141 lat (msec) : 10=0.12%, 20=90.23%, 50=7.39%, 100=2.26% 00:43:53.141 cpu : usr=93.12%, sys=6.30%, ctx=7, majf=0, minf=1637 00:43:53.141 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:53.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:53.141 issued rwts: total=839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:53.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:53.142 00:43:53.142 Run status group 0 (all jobs): 00:43:53.142 READ: bw=69.3MiB/s (72.7MB/s), 20.8MiB/s-24.6MiB/s (21.8MB/s-25.8MB/s), io=350MiB (367MB), run=5007-5046msec 00:43:53.708 ----------------------------------------------------- 00:43:53.708 Suppressions used: 00:43:53.708 count bytes template 00:43:53.709 5 44 /usr/src/fio/parse.c 00:43:53.709 1 8 libtcmalloc_minimal.so 00:43:53.709 1 904 libcrypto.so 00:43:53.709 ----------------------------------------------------- 00:43:53.709 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 bdev_null0 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 
00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 [2024-11-19 08:07:45.524259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 bdev_null1 
00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 
-- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 bdev_null2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # config=() 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:53.709 { 00:43:53.709 "params": { 00:43:53.709 "name": "Nvme$subsystem", 00:43:53.709 "trtype": "$TEST_TRANSPORT", 00:43:53.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:53.709 "adrfam": "ipv4", 00:43:53.709 "trsvcid": "$NVMF_PORT", 00:43:53.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:53.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:53.709 "hdgst": ${hdgst:-false}, 00:43:53.709 "ddgst": ${ddgst:-false} 00:43:53.709 }, 00:43:53.709 "method": "bdev_nvme_attach_controller" 00:43:53.709 } 00:43:53.709 EOF 00:43:53.709 )") 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:53.709 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:53.710 { 00:43:53.710 "params": { 00:43:53.710 "name": "Nvme$subsystem", 00:43:53.710 "trtype": "$TEST_TRANSPORT", 00:43:53.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:53.710 "adrfam": "ipv4", 00:43:53.710 "trsvcid": "$NVMF_PORT", 00:43:53.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:53.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:53.710 "hdgst": ${hdgst:-false}, 00:43:53.710 "ddgst": ${ddgst:-false} 00:43:53.710 }, 00:43:53.710 "method": "bdev_nvme_attach_controller" 00:43:53.710 } 00:43:53.710 EOF 00:43:53.710 )") 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:53.710 
08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:53.710 { 00:43:53.710 "params": { 00:43:53.710 "name": "Nvme$subsystem", 00:43:53.710 "trtype": "$TEST_TRANSPORT", 00:43:53.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:53.710 "adrfam": "ipv4", 00:43:53.710 "trsvcid": "$NVMF_PORT", 00:43:53.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:53.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:53.710 "hdgst": ${hdgst:-false}, 00:43:53.710 "ddgst": ${ddgst:-false} 00:43:53.710 }, 00:43:53.710 "method": "bdev_nvme_attach_controller" 00:43:53.710 } 00:43:53.710 EOF 00:43:53.710 )") 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:53.710 "params": { 00:43:53.710 "name": "Nvme0", 00:43:53.710 "trtype": "tcp", 00:43:53.710 "traddr": "10.0.0.2", 00:43:53.710 "adrfam": "ipv4", 00:43:53.710 "trsvcid": "4420", 00:43:53.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:53.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:53.710 "hdgst": false, 00:43:53.710 "ddgst": false 00:43:53.710 }, 00:43:53.710 "method": "bdev_nvme_attach_controller" 00:43:53.710 },{ 00:43:53.710 "params": { 00:43:53.710 "name": "Nvme1", 00:43:53.710 "trtype": "tcp", 00:43:53.710 "traddr": "10.0.0.2", 00:43:53.710 "adrfam": "ipv4", 00:43:53.710 "trsvcid": "4420", 00:43:53.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:53.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:53.710 "hdgst": false, 00:43:53.710 "ddgst": false 00:43:53.710 }, 00:43:53.710 "method": "bdev_nvme_attach_controller" 00:43:53.710 },{ 00:43:53.710 "params": { 00:43:53.710 "name": "Nvme2", 00:43:53.710 "trtype": "tcp", 00:43:53.710 "traddr": "10.0.0.2", 00:43:53.710 "adrfam": "ipv4", 00:43:53.710 "trsvcid": "4420", 00:43:53.710 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:53.710 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:53.710 "hdgst": false, 00:43:53.710 "ddgst": false 00:43:53.710 }, 00:43:53.710 "method": "bdev_nvme_attach_controller" 00:43:53.710 }' 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:53.710 08:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:54.276 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:54.276 ... 00:43:54.276 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:54.276 ... 00:43:54.276 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:54.276 ... 00:43:54.276 fio-3.35 00:43:54.276 Starting 24 threads 00:44:06.478 00:44:06.478 filename0: (groupid=0, jobs=1): err= 0: pid=3208964: Tue Nov 19 08:07:57 2024 00:44:06.478 read: IOPS=320, BW=1281KiB/s (1312kB/s)(12.6MiB/10042msec) 00:44:06.478 slat (nsec): min=6612, max=96811, avg=29304.57, stdev=19637.98 00:44:06.478 clat (usec): min=11873, max=68943, avg=49680.78, stdev=9739.55 00:44:06.478 lat (usec): min=11894, max=68980, avg=49710.09, stdev=9733.73 00:44:06.478 clat percentiles (usec): 00:44:06.478 | 1.00th=[20841], 5.00th=[43254], 10.00th=[44303], 20.00th=[44303], 00:44:06.478 | 30.00th=[44827], 40.00th=[45876], 50.00th=[45876], 60.00th=[46400], 00:44:06.478 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[67634], 00:44:06.478 | 99.00th=[67634], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:44:06.478 | 99.99th=[68682] 00:44:06.478 bw ( KiB/s): min= 896, max= 1664, per=4.19%, avg=1280.00, stdev=215.79, samples=20 00:44:06.478 iops : min= 224, max= 416, avg=320.00, stdev=53.95, samples=20 00:44:06.478 lat (msec) : 20=1.00%, 50=75.12%, 100=23.88% 00:44:06.478 cpu : usr=97.59%, sys=1.65%, ctx=53, majf=0, minf=1634 00:44:06.478 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:06.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:44:06.478 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.478 issued rwts: total=3216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.478 filename0: (groupid=0, jobs=1): err= 0: pid=3208965: Tue Nov 19 08:07:57 2024 00:44:06.478 read: IOPS=313, BW=1253KiB/s (1283kB/s)(12.2MiB/10011msec) 00:44:06.478 slat (nsec): min=10663, max=99982, avg=43559.86, stdev=20673.53 00:44:06.478 clat (msec): min=33, max=108, avg=50.69, stdev= 9.78 00:44:06.478 lat (msec): min=33, max=108, avg=50.74, stdev= 9.77 00:44:06.478 clat percentiles (msec): 00:44:06.478 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:06.478 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:44:06.478 | 70.00th=[ 48], 80.00th=[ 65], 90.00th=[ 67], 95.00th=[ 68], 00:44:06.478 | 99.00th=[ 69], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:44:06.478 | 99.99th=[ 109] 00:44:06.478 bw ( KiB/s): min= 896, max= 1536, per=4.14%, avg=1266.53, stdev=190.31, samples=19 00:44:06.478 iops : min= 224, max= 384, avg=316.63, stdev=47.58, samples=19 00:44:06.478 lat (msec) : 50=74.49%, 100=25.00%, 250=0.51% 00:44:06.478 cpu : usr=98.25%, sys=1.26%, ctx=19, majf=0, minf=1635 00:44:06.478 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:06.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.478 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.478 issued rwts: total=3136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.478 filename0: (groupid=0, jobs=1): err= 0: pid=3208966: Tue Nov 19 08:07:57 2024 00:44:06.478 read: IOPS=314, BW=1257KiB/s (1287kB/s)(12.3MiB/10032msec) 00:44:06.478 slat (nsec): min=6890, max=62085, avg=25708.41, stdev=10760.89 00:44:06.478 clat (usec): min=30446, max=90546, avg=50707.75, stdev=9161.89 
00:44:06.478 lat (usec): min=30461, max=90565, avg=50733.45, stdev=9160.83 00:44:06.478 clat percentiles (usec): 00:44:06.478 | 1.00th=[43779], 5.00th=[44303], 10.00th=[44303], 20.00th=[44827], 00:44:06.478 | 30.00th=[45351], 40.00th=[45876], 50.00th=[46400], 60.00th=[46924], 00:44:06.478 | 70.00th=[47449], 80.00th=[65274], 90.00th=[66323], 95.00th=[66847], 00:44:06.478 | 99.00th=[68682], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:44:06.478 | 99.99th=[90702] 00:44:06.478 bw ( KiB/s): min= 896, max= 1408, per=4.10%, avg=1254.40, stdev=197.17, samples=20 00:44:06.478 iops : min= 224, max= 352, avg=313.60, stdev=49.29, samples=20 00:44:06.478 lat (msec) : 50=75.06%, 100=24.94% 00:44:06.478 cpu : usr=98.49%, sys=1.01%, ctx=27, majf=0, minf=1636 00:44:06.478 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:06.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.478 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.478 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.479 filename0: (groupid=0, jobs=1): err= 0: pid=3208967: Tue Nov 19 08:07:57 2024 00:44:06.479 read: IOPS=316, BW=1264KiB/s (1295kB/s)(12.4MiB/10024msec) 00:44:06.479 slat (nsec): min=6331, max=93267, avg=40124.60, stdev=13124.25 00:44:06.479 clat (usec): min=29285, max=89824, avg=50262.92, stdev=8849.03 00:44:06.479 lat (usec): min=29304, max=89848, avg=50303.05, stdev=8844.93 00:44:06.479 clat percentiles (usec): 00:44:06.479 | 1.00th=[40633], 5.00th=[43779], 10.00th=[44303], 20.00th=[44827], 00:44:06.479 | 30.00th=[44827], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:44:06.479 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.479 | 99.00th=[67634], 99.50th=[68682], 99.90th=[88605], 99.95th=[89654], 00:44:06.479 | 99.99th=[89654] 00:44:06.479 bw ( KiB/s): min= 892, max= 
1408, per=4.12%, avg=1260.60, stdev=200.74, samples=20 00:44:06.479 iops : min= 223, max= 352, avg=315.15, stdev=50.18, samples=20 00:44:06.479 lat (msec) : 50=75.82%, 100=24.18% 00:44:06.479 cpu : usr=98.11%, sys=1.36%, ctx=17, majf=0, minf=1632 00:44:06.479 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 issued rwts: total=3168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.479 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.479 filename0: (groupid=0, jobs=1): err= 0: pid=3208968: Tue Nov 19 08:07:57 2024 00:44:06.479 read: IOPS=313, BW=1254KiB/s (1284kB/s)(12.2MiB/10006msec) 00:44:06.479 slat (nsec): min=10883, max=97376, avg=27394.17, stdev=11284.37 00:44:06.479 clat (msec): min=43, max=103, avg=50.81, stdev= 9.46 00:44:06.479 lat (msec): min=43, max=103, avg=50.84, stdev= 9.46 00:44:06.479 clat percentiles (msec): 00:44:06.479 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:06.479 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:44:06.479 | 70.00th=[ 48], 80.00th=[ 65], 90.00th=[ 67], 95.00th=[ 68], 00:44:06.479 | 99.00th=[ 69], 99.50th=[ 104], 99.90th=[ 104], 99.95th=[ 104], 00:44:06.479 | 99.99th=[ 104] 00:44:06.479 bw ( KiB/s): min= 896, max= 1536, per=4.14%, avg=1266.63, stdev=185.40, samples=19 00:44:06.479 iops : min= 224, max= 384, avg=316.63, stdev=46.37, samples=19 00:44:06.479 lat (msec) : 50=74.49%, 100=25.00%, 250=0.51% 00:44:06.479 cpu : usr=98.24%, sys=1.27%, ctx=20, majf=0, minf=1633 00:44:06.479 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 issued rwts: total=3136,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:44:06.479 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.479 filename0: (groupid=0, jobs=1): err= 0: pid=3208969: Tue Nov 19 08:07:57 2024 00:44:06.479 read: IOPS=314, BW=1256KiB/s (1286kB/s)(12.3MiB/10036msec) 00:44:06.479 slat (nsec): min=8882, max=94087, avg=38109.35, stdev=13119.16 00:44:06.479 clat (usec): min=30828, max=98179, avg=50565.10, stdev=9310.22 00:44:06.479 lat (usec): min=30864, max=98203, avg=50603.21, stdev=9307.33 00:44:06.479 clat percentiles (usec): 00:44:06.479 | 1.00th=[43254], 5.00th=[43779], 10.00th=[44303], 20.00th=[44827], 00:44:06.479 | 30.00th=[44827], 40.00th=[45351], 50.00th=[45876], 60.00th=[46924], 00:44:06.479 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.479 | 99.00th=[83362], 99.50th=[87557], 99.90th=[90702], 99.95th=[98042], 00:44:06.479 | 99.99th=[98042] 00:44:06.479 bw ( KiB/s): min= 873, max= 1408, per=4.10%, avg=1253.25, stdev=199.68, samples=20 00:44:06.479 iops : min= 218, max= 352, avg=313.30, stdev=49.94, samples=20 00:44:06.479 lat (msec) : 50=75.48%, 100=24.52% 00:44:06.479 cpu : usr=97.06%, sys=1.82%, ctx=130, majf=0, minf=1631 00:44:06.479 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.479 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.479 filename0: (groupid=0, jobs=1): err= 0: pid=3208970: Tue Nov 19 08:07:57 2024 00:44:06.479 read: IOPS=318, BW=1272KiB/s (1303kB/s)(12.4MiB/10009msec) 00:44:06.479 slat (usec): min=7, max=105, avg=31.55, stdev=10.15 00:44:06.479 clat (usec): min=16359, max=89702, avg=50019.42, stdev=9295.02 00:44:06.479 lat (usec): min=16388, max=89731, avg=50050.97, stdev=9295.90 00:44:06.479 clat percentiles 
(usec): 00:44:06.479 | 1.00th=[23725], 5.00th=[43779], 10.00th=[44303], 20.00th=[44827], 00:44:06.479 | 30.00th=[45351], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:44:06.479 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.479 | 99.00th=[67634], 99.50th=[68682], 99.90th=[88605], 99.95th=[89654], 00:44:06.479 | 99.99th=[89654] 00:44:06.479 bw ( KiB/s): min= 896, max= 1536, per=4.21%, avg=1286.74, stdev=188.28, samples=19 00:44:06.479 iops : min= 224, max= 384, avg=321.68, stdev=47.07, samples=19 00:44:06.479 lat (msec) : 20=0.50%, 50=75.79%, 100=23.71% 00:44:06.479 cpu : usr=95.24%, sys=2.82%, ctx=185, majf=0, minf=1634 00:44:06.479 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.479 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.479 filename0: (groupid=0, jobs=1): err= 0: pid=3208971: Tue Nov 19 08:07:57 2024 00:44:06.479 read: IOPS=319, BW=1278KiB/s (1309kB/s)(12.5MiB/10014msec) 00:44:06.479 slat (nsec): min=6066, max=88725, avg=33806.21, stdev=12396.98 00:44:06.479 clat (usec): min=12072, max=69321, avg=49768.43, stdev=9727.36 00:44:06.479 lat (usec): min=12089, max=69370, avg=49802.24, stdev=9725.66 00:44:06.479 clat percentiles (usec): 00:44:06.479 | 1.00th=[16450], 5.00th=[43779], 10.00th=[44303], 20.00th=[44827], 00:44:06.479 | 30.00th=[45351], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:44:06.479 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.479 | 99.00th=[67634], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:44:06.479 | 99.99th=[69731] 00:44:06.479 bw ( KiB/s): min= 896, max= 1664, per=4.17%, avg=1273.60, stdev=217.68, samples=20 00:44:06.479 iops : min= 224, max= 416, 
avg=318.40, stdev=54.42, samples=20 00:44:06.479 lat (msec) : 20=1.50%, 50=74.72%, 100=23.78% 00:44:06.479 cpu : usr=97.63%, sys=1.52%, ctx=105, majf=0, minf=1634 00:44:06.479 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.479 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.479 filename1: (groupid=0, jobs=1): err= 0: pid=3208972: Tue Nov 19 08:07:57 2024 00:44:06.479 read: IOPS=319, BW=1278KiB/s (1309kB/s)(12.5MiB/10014msec) 00:44:06.479 slat (usec): min=6, max=100, avg=37.35, stdev=16.26 00:44:06.479 clat (usec): min=11910, max=89942, avg=49742.95, stdev=9820.46 00:44:06.479 lat (usec): min=11929, max=89966, avg=49780.30, stdev=9816.79 00:44:06.479 clat percentiles (usec): 00:44:06.479 | 1.00th=[15795], 5.00th=[43779], 10.00th=[44303], 20.00th=[44827], 00:44:06.479 | 30.00th=[45351], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:44:06.479 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.479 | 99.00th=[67634], 99.50th=[68682], 99.90th=[87557], 99.95th=[89654], 00:44:06.479 | 99.99th=[89654] 00:44:06.479 bw ( KiB/s): min= 896, max= 1664, per=4.17%, avg=1273.60, stdev=217.68, samples=20 00:44:06.479 iops : min= 224, max= 416, avg=318.40, stdev=54.42, samples=20 00:44:06.479 lat (msec) : 20=1.50%, 50=74.75%, 100=23.75% 00:44:06.479 cpu : usr=98.08%, sys=1.42%, ctx=26, majf=0, minf=1632 00:44:06.479 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 issued rwts: total=3200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.479 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:44:06.479 filename1: (groupid=0, jobs=1): err= 0: pid=3208973: Tue Nov 19 08:07:57 2024 00:44:06.479 read: IOPS=313, BW=1254KiB/s (1284kB/s)(12.2MiB/10007msec) 00:44:06.479 slat (nsec): min=11904, max=79193, avg=30784.75, stdev=12404.77 00:44:06.479 clat (msec): min=43, max=104, avg=50.78, stdev= 9.48 00:44:06.479 lat (msec): min=43, max=104, avg=50.81, stdev= 9.48 00:44:06.479 clat percentiles (msec): 00:44:06.479 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:06.479 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:44:06.479 | 70.00th=[ 48], 80.00th=[ 65], 90.00th=[ 67], 95.00th=[ 68], 00:44:06.479 | 99.00th=[ 69], 99.50th=[ 105], 99.90th=[ 105], 99.95th=[ 105], 00:44:06.479 | 99.99th=[ 105] 00:44:06.479 bw ( KiB/s): min= 896, max= 1536, per=4.14%, avg=1266.53, stdev=190.31, samples=19 00:44:06.479 iops : min= 224, max= 384, avg=316.63, stdev=47.58, samples=19 00:44:06.479 lat (msec) : 50=74.49%, 100=25.00%, 250=0.51% 00:44:06.479 cpu : usr=98.34%, sys=1.16%, ctx=20, majf=0, minf=1631 00:44:06.479 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:06.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.479 issued rwts: total=3136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.479 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.479 filename1: (groupid=0, jobs=1): err= 0: pid=3208974: Tue Nov 19 08:07:57 2024 00:44:06.479 read: IOPS=314, BW=1257KiB/s (1287kB/s)(12.3MiB/10032msec) 00:44:06.479 slat (nsec): min=11452, max=75773, avg=31456.14, stdev=9858.51 00:44:06.479 clat (usec): min=29388, max=94231, avg=50628.75, stdev=9746.44 00:44:06.479 lat (usec): min=29411, max=94256, avg=50660.20, stdev=9746.97 00:44:06.479 clat percentiles (usec): 00:44:06.479 | 1.00th=[43254], 5.00th=[44303], 10.00th=[44303], 
20.00th=[44827], 00:44:06.479 | 30.00th=[45351], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:44:06.479 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[67634], 00:44:06.479 | 99.00th=[88605], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:44:06.479 | 99.99th=[93848] 00:44:06.480 bw ( KiB/s): min= 896, max= 1536, per=4.10%, avg=1253.45, stdev=194.46, samples=20 00:44:06.480 iops : min= 224, max= 384, avg=313.30, stdev=48.68, samples=20 00:44:06.480 lat (msec) : 50=75.92%, 100=24.08% 00:44:06.480 cpu : usr=98.44%, sys=1.07%, ctx=21, majf=0, minf=1633 00:44:06.480 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:44:06.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.480 filename1: (groupid=0, jobs=1): err= 0: pid=3208975: Tue Nov 19 08:07:57 2024 00:44:06.480 read: IOPS=314, BW=1257KiB/s (1287kB/s)(12.3MiB/10034msec) 00:44:06.480 slat (nsec): min=12264, max=85385, avg=33966.68, stdev=10702.67 00:44:06.480 clat (usec): min=43206, max=90399, avg=50619.71, stdev=9347.78 00:44:06.480 lat (usec): min=43236, max=90440, avg=50653.67, stdev=9346.68 00:44:06.480 clat percentiles (usec): 00:44:06.480 | 1.00th=[43779], 5.00th=[44303], 10.00th=[44303], 20.00th=[44827], 00:44:06.480 | 30.00th=[45351], 40.00th=[45351], 50.00th=[45876], 60.00th=[46924], 00:44:06.480 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.480 | 99.00th=[86508], 99.50th=[87557], 99.90th=[88605], 99.95th=[90702], 00:44:06.480 | 99.99th=[90702] 00:44:06.480 bw ( KiB/s): min= 868, max= 1536, per=4.10%, avg=1253.00, stdev=204.44, samples=20 00:44:06.480 iops : min= 217, max= 384, avg=313.25, stdev=51.11, samples=20 00:44:06.480 lat (msec) : 50=75.63%, 100=24.37% 
00:44:06.480 cpu : usr=96.92%, sys=2.09%, ctx=127, majf=0, minf=1633 00:44:06.480 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:44:06.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.480 filename1: (groupid=0, jobs=1): err= 0: pid=3208976: Tue Nov 19 08:07:57 2024 00:44:06.480 read: IOPS=316, BW=1264KiB/s (1295kB/s)(12.4MiB/10022msec) 00:44:06.480 slat (nsec): min=5888, max=80549, avg=36130.63, stdev=11047.83 00:44:06.480 clat (usec): min=22085, max=89998, avg=50301.96, stdev=9562.82 00:44:06.480 lat (usec): min=22132, max=90040, avg=50338.09, stdev=9561.16 00:44:06.480 clat percentiles (usec): 00:44:06.480 | 1.00th=[30540], 5.00th=[43779], 10.00th=[44303], 20.00th=[44827], 00:44:06.480 | 30.00th=[44827], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:44:06.480 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.480 | 99.00th=[68682], 99.50th=[72877], 99.90th=[89654], 99.95th=[89654], 00:44:06.480 | 99.99th=[89654] 00:44:06.480 bw ( KiB/s): min= 896, max= 1408, per=4.12%, avg=1260.80, stdev=200.35, samples=20 00:44:06.480 iops : min= 224, max= 352, avg=315.20, stdev=50.09, samples=20 00:44:06.480 lat (msec) : 50=74.18%, 100=25.82% 00:44:06.480 cpu : usr=98.36%, sys=1.14%, ctx=17, majf=0, minf=1631 00:44:06.480 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:44:06.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 issued rwts: total=3168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.480 filename1: (groupid=0, jobs=1): 
err= 0: pid=3208977: Tue Nov 19 08:07:57 2024 00:44:06.480 read: IOPS=313, BW=1254KiB/s (1284kB/s)(12.2MiB/10004msec) 00:44:06.480 slat (nsec): min=12103, max=94051, avg=35708.72, stdev=14326.79 00:44:06.480 clat (msec): min=31, max=101, avg=50.70, stdev= 9.54 00:44:06.480 lat (msec): min=31, max=101, avg=50.74, stdev= 9.54 00:44:06.480 clat percentiles (msec): 00:44:06.480 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:06.480 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:44:06.480 | 70.00th=[ 48], 80.00th=[ 65], 90.00th=[ 67], 95.00th=[ 68], 00:44:06.480 | 99.00th=[ 69], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102], 00:44:06.480 | 99.99th=[ 102] 00:44:06.480 bw ( KiB/s): min= 896, max= 1536, per=4.14%, avg=1266.63, stdev=185.40, samples=19 00:44:06.480 iops : min= 224, max= 384, avg=316.63, stdev=46.37, samples=19 00:44:06.480 lat (msec) : 50=74.55%, 100=24.94%, 250=0.51% 00:44:06.480 cpu : usr=97.66%, sys=1.54%, ctx=39, majf=0, minf=1633 00:44:06.480 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:06.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 issued rwts: total=3136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.480 filename1: (groupid=0, jobs=1): err= 0: pid=3208978: Tue Nov 19 08:07:57 2024 00:44:06.480 read: IOPS=318, BW=1272KiB/s (1303kB/s)(12.4MiB/10010msec) 00:44:06.480 slat (nsec): min=5923, max=95914, avg=37890.84, stdev=12668.79 00:44:06.480 clat (usec): min=16677, max=89634, avg=49961.62, stdev=9343.89 00:44:06.480 lat (usec): min=16699, max=89667, avg=49999.51, stdev=9342.72 00:44:06.480 clat percentiles (usec): 00:44:06.480 | 1.00th=[23987], 5.00th=[43779], 10.00th=[44303], 20.00th=[44827], 00:44:06.480 | 30.00th=[44827], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 
00:44:06.480 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.480 | 99.00th=[67634], 99.50th=[68682], 99.90th=[89654], 99.95th=[89654], 00:44:06.480 | 99.99th=[89654] 00:44:06.480 bw ( KiB/s): min= 896, max= 1536, per=4.21%, avg=1286.74, stdev=188.28, samples=19 00:44:06.480 iops : min= 224, max= 384, avg=321.68, stdev=47.07, samples=19 00:44:06.480 lat (msec) : 20=0.50%, 50=75.82%, 100=23.68% 00:44:06.480 cpu : usr=97.12%, sys=1.93%, ctx=110, majf=0, minf=1636 00:44:06.480 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:44:06.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.480 filename1: (groupid=0, jobs=1): err= 0: pid=3208979: Tue Nov 19 08:07:57 2024 00:44:06.480 read: IOPS=314, BW=1257KiB/s (1287kB/s)(12.3MiB/10028msec) 00:44:06.480 slat (usec): min=12, max=101, avg=38.22, stdev=15.94 00:44:06.480 clat (usec): min=29406, max=90480, avg=50549.64, stdev=9424.83 00:44:06.480 lat (usec): min=29422, max=90504, avg=50587.86, stdev=9419.16 00:44:06.480 clat percentiles (usec): 00:44:06.480 | 1.00th=[42730], 5.00th=[43779], 10.00th=[44303], 20.00th=[44827], 00:44:06.480 | 30.00th=[45351], 40.00th=[45351], 50.00th=[45876], 60.00th=[46924], 00:44:06.480 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.480 | 99.00th=[81265], 99.50th=[87557], 99.90th=[89654], 99.95th=[90702], 00:44:06.480 | 99.99th=[90702] 00:44:06.480 bw ( KiB/s): min= 878, max= 1408, per=4.10%, avg=1253.50, stdev=198.70, samples=20 00:44:06.480 iops : min= 219, max= 352, avg=313.35, stdev=49.73, samples=20 00:44:06.480 lat (msec) : 50=75.32%, 100=24.68% 00:44:06.480 cpu : usr=98.33%, sys=1.16%, ctx=18, majf=0, minf=1631 00:44:06.480 IO depths : 1=5.7%, 
2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:44:06.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.480 filename2: (groupid=0, jobs=1): err= 0: pid=3208980: Tue Nov 19 08:07:57 2024 00:44:06.480 read: IOPS=384, BW=1539KiB/s (1576kB/s)(15.1MiB/10035msec) 00:44:06.480 slat (nsec): min=10964, max=91221, avg=25121.16, stdev=18725.77 00:44:06.480 clat (msec): min=24, max=151, avg=41.40, stdev=11.82 00:44:06.480 lat (msec): min=24, max=151, avg=41.42, stdev=11.82 00:44:06.480 clat percentiles (msec): 00:44:06.480 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:44:06.480 | 30.00th=[ 33], 40.00th=[ 39], 50.00th=[ 44], 60.00th=[ 45], 00:44:06.480 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 54], 95.00th=[ 61], 00:44:06.480 | 99.00th=[ 90], 99.50th=[ 91], 99.90th=[ 129], 99.95th=[ 153], 00:44:06.480 | 99.99th=[ 153] 00:44:06.480 bw ( KiB/s): min= 1136, max= 1968, per=5.08%, avg=1552.95, stdev=261.89, samples=19 00:44:06.480 iops : min= 284, max= 492, avg=388.21, stdev=65.52, samples=19 00:44:06.480 lat (msec) : 50=89.27%, 100=10.31%, 250=0.41% 00:44:06.480 cpu : usr=98.24%, sys=1.26%, ctx=22, majf=0, minf=1633 00:44:06.480 IO depths : 1=0.1%, 2=1.5%, 4=10.1%, 8=75.6%, 16=12.7%, 32=0.0%, >=64=0.0% 00:44:06.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 complete : 0=0.0%, 4=89.9%, 8=4.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.480 issued rwts: total=3860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.480 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.480 filename2: (groupid=0, jobs=1): err= 0: pid=3208981: Tue Nov 19 08:07:57 2024 00:44:06.481 read: IOPS=334, BW=1336KiB/s (1368kB/s)(13.1MiB/10035msec) 00:44:06.481 slat 
(nsec): min=11286, max=96978, avg=29485.73, stdev=13985.25 00:44:06.481 clat (msec): min=21, max=113, avg=47.62, stdev=12.35 00:44:06.481 lat (msec): min=21, max=113, avg=47.65, stdev=12.35 00:44:06.481 clat percentiles (msec): 00:44:06.481 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 43], 00:44:06.481 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 46], 00:44:06.481 | 70.00th=[ 47], 80.00th=[ 56], 90.00th=[ 67], 95.00th=[ 67], 00:44:06.481 | 99.00th=[ 100], 99.50th=[ 104], 99.90th=[ 114], 99.95th=[ 114], 00:44:06.481 | 99.99th=[ 114] 00:44:06.481 bw ( KiB/s): min= 896, max= 1760, per=4.41%, avg=1347.37, stdev=243.86, samples=19 00:44:06.481 iops : min= 224, max= 440, avg=336.84, stdev=60.96, samples=19 00:44:06.481 lat (msec) : 50=77.86%, 100=21.24%, 250=0.89% 00:44:06.481 cpu : usr=98.26%, sys=1.24%, ctx=13, majf=0, minf=1635 00:44:06.481 IO depths : 1=2.1%, 2=6.4%, 4=18.8%, 8=62.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:44:06.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 complete : 0=0.0%, 4=92.5%, 8=2.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 issued rwts: total=3352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.481 filename2: (groupid=0, jobs=1): err= 0: pid=3208982: Tue Nov 19 08:07:57 2024 00:44:06.481 read: IOPS=313, BW=1254KiB/s (1284kB/s)(12.2MiB/10006msec) 00:44:06.481 slat (usec): min=15, max=100, avg=62.48, stdev=10.22 00:44:06.481 clat (msec): min=32, max=103, avg=50.49, stdev= 9.73 00:44:06.481 lat (msec): min=32, max=103, avg=50.55, stdev= 9.73 00:44:06.481 clat percentiles (msec): 00:44:06.481 | 1.00th=[ 43], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:06.481 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:44:06.481 | 70.00th=[ 47], 80.00th=[ 65], 90.00th=[ 67], 95.00th=[ 67], 00:44:06.481 | 99.00th=[ 86], 99.50th=[ 103], 99.90th=[ 104], 99.95th=[ 104], 00:44:06.481 | 99.99th=[ 104] 
00:44:06.481 bw ( KiB/s): min= 896, max= 1536, per=4.14%, avg=1266.37, stdev=185.57, samples=19 00:44:06.481 iops : min= 224, max= 384, avg=316.58, stdev=46.40, samples=19 00:44:06.481 lat (msec) : 50=75.00%, 100=24.49%, 250=0.51% 00:44:06.481 cpu : usr=98.08%, sys=1.38%, ctx=13, majf=0, minf=1633 00:44:06.481 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:44:06.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 issued rwts: total=3136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.481 filename2: (groupid=0, jobs=1): err= 0: pid=3208983: Tue Nov 19 08:07:57 2024 00:44:06.481 read: IOPS=312, BW=1252KiB/s (1282kB/s)(12.2MiB/10022msec) 00:44:06.481 slat (usec): min=12, max=116, avg=30.99, stdev= 8.91 00:44:06.481 clat (msec): min=31, max=101, avg=50.79, stdev=10.10 00:44:06.481 lat (msec): min=31, max=101, avg=50.82, stdev=10.09 00:44:06.481 clat percentiles (msec): 00:44:06.481 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 45], 00:44:06.481 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:44:06.481 | 70.00th=[ 48], 80.00th=[ 65], 90.00th=[ 67], 95.00th=[ 68], 00:44:06.481 | 99.00th=[ 87], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102], 00:44:06.481 | 99.99th=[ 102] 00:44:06.481 bw ( KiB/s): min= 912, max= 1520, per=4.10%, avg=1253.70, stdev=188.49, samples=20 00:44:06.481 iops : min= 228, max= 380, avg=313.40, stdev=47.14, samples=20 00:44:06.481 lat (msec) : 50=73.85%, 100=25.57%, 250=0.57% 00:44:06.481 cpu : usr=97.71%, sys=1.49%, ctx=148, majf=0, minf=1631 00:44:06.481 IO depths : 1=4.0%, 2=10.2%, 4=24.8%, 8=52.5%, 16=8.5%, 32=0.0%, >=64=0.0% 00:44:06.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:44:06.481 issued rwts: total=3136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.481 filename2: (groupid=0, jobs=1): err= 0: pid=3208984: Tue Nov 19 08:07:57 2024 00:44:06.481 read: IOPS=313, BW=1253KiB/s (1283kB/s)(12.2MiB/10011msec) 00:44:06.481 slat (nsec): min=8830, max=72830, avg=30604.94, stdev=9138.92 00:44:06.481 clat (msec): min=43, max=108, avg=50.81, stdev= 9.62 00:44:06.481 lat (msec): min=43, max=108, avg=50.84, stdev= 9.62 00:44:06.481 clat percentiles (msec): 00:44:06.481 | 1.00th=[ 44], 5.00th=[ 45], 10.00th=[ 45], 20.00th=[ 45], 00:44:06.481 | 30.00th=[ 46], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 47], 00:44:06.481 | 70.00th=[ 48], 80.00th=[ 65], 90.00th=[ 67], 95.00th=[ 68], 00:44:06.481 | 99.00th=[ 69], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:44:06.481 | 99.99th=[ 109] 00:44:06.481 bw ( KiB/s): min= 896, max= 1536, per=4.14%, avg=1266.53, stdev=190.31, samples=19 00:44:06.481 iops : min= 224, max= 384, avg=316.63, stdev=47.58, samples=19 00:44:06.481 lat (msec) : 50=74.49%, 100=25.00%, 250=0.51% 00:44:06.481 cpu : usr=98.15%, sys=1.36%, ctx=33, majf=0, minf=1631 00:44:06.481 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:44:06.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 issued rwts: total=3136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.481 filename2: (groupid=0, jobs=1): err= 0: pid=3208985: Tue Nov 19 08:07:57 2024 00:44:06.481 read: IOPS=313, BW=1256KiB/s (1286kB/s)(12.3MiB/10041msec) 00:44:06.481 slat (nsec): min=10960, max=98512, avg=31959.90, stdev=7155.75 00:44:06.481 clat (usec): min=28000, max=89959, avg=50594.36, stdev=9089.53 00:44:06.481 lat (usec): min=28023, max=89990, avg=50626.32, stdev=9089.27 00:44:06.481 clat 
percentiles (usec): 00:44:06.481 | 1.00th=[43254], 5.00th=[44303], 10.00th=[44303], 20.00th=[44827], 00:44:06.481 | 30.00th=[45351], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:44:06.481 | 70.00th=[47449], 80.00th=[65274], 90.00th=[66323], 95.00th=[66847], 00:44:06.481 | 99.00th=[68682], 99.50th=[81265], 99.90th=[87557], 99.95th=[89654], 00:44:06.481 | 99.99th=[89654] 00:44:06.481 bw ( KiB/s): min= 896, max= 1408, per=4.10%, avg=1254.40, stdev=197.43, samples=20 00:44:06.481 iops : min= 224, max= 352, avg=313.60, stdev=49.36, samples=20 00:44:06.481 lat (msec) : 50=75.00%, 100=25.00% 00:44:06.481 cpu : usr=97.07%, sys=1.85%, ctx=114, majf=0, minf=1636 00:44:06.481 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:06.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.481 filename2: (groupid=0, jobs=1): err= 0: pid=3208986: Tue Nov 19 08:07:57 2024 00:44:06.481 read: IOPS=315, BW=1261KiB/s (1291kB/s)(12.3MiB/10001msec) 00:44:06.481 slat (usec): min=7, max=103, avg=45.46, stdev=16.26 00:44:06.481 clat (usec): min=30801, max=90445, avg=50352.83, stdev=8941.32 00:44:06.481 lat (usec): min=30820, max=90464, avg=50398.29, stdev=8933.81 00:44:06.481 clat percentiles (usec): 00:44:06.481 | 1.00th=[42730], 5.00th=[43779], 10.00th=[44303], 20.00th=[44827], 00:44:06.481 | 30.00th=[44827], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:44:06.481 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.481 | 99.00th=[68682], 99.50th=[68682], 99.90th=[89654], 99.95th=[90702], 00:44:06.481 | 99.99th=[90702] 00:44:06.481 bw ( KiB/s): min= 896, max= 1536, per=4.17%, avg=1273.26, stdev=183.39, samples=19 00:44:06.481 iops : min= 224, max= 384, avg=318.32, 
stdev=45.85, samples=19 00:44:06.481 lat (msec) : 50=75.32%, 100=24.68% 00:44:06.481 cpu : usr=97.72%, sys=1.42%, ctx=119, majf=0, minf=1633 00:44:06.481 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:44:06.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.481 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:06.481 filename2: (groupid=0, jobs=1): err= 0: pid=3208987: Tue Nov 19 08:07:57 2024 00:44:06.481 read: IOPS=313, BW=1256KiB/s (1286kB/s)(12.3MiB/10039msec) 00:44:06.481 slat (usec): min=5, max=143, avg=61.35, stdev=10.15 00:44:06.481 clat (usec): min=42670, max=91675, avg=50398.75, stdev=9406.72 00:44:06.481 lat (usec): min=42732, max=91693, avg=50460.10, stdev=9406.31 00:44:06.481 clat percentiles (usec): 00:44:06.481 | 1.00th=[43254], 5.00th=[43779], 10.00th=[44303], 20.00th=[44303], 00:44:06.481 | 30.00th=[44827], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:44:06.481 | 70.00th=[47449], 80.00th=[64750], 90.00th=[66323], 95.00th=[66847], 00:44:06.481 | 99.00th=[86508], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:44:06.481 | 99.99th=[91751] 00:44:06.481 bw ( KiB/s): min= 896, max= 1536, per=4.10%, avg=1254.40, stdev=188.49, samples=20 00:44:06.481 iops : min= 224, max= 384, avg=313.60, stdev=47.12, samples=20 00:44:06.481 lat (msec) : 50=75.67%, 100=24.33% 00:44:06.481 cpu : usr=98.06%, sys=1.40%, ctx=18, majf=0, minf=1633 00:44:06.481 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:06.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:06.481 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:06.481 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:44:06.481 00:44:06.481 Run status group 0 (all jobs): 00:44:06.481 READ: bw=29.8MiB/s (31.3MB/s), 1252KiB/s-1539KiB/s (1282kB/s-1576kB/s), io=300MiB (314MB), run=10001-10042msec 00:44:06.740 ----------------------------------------------------- 00:44:06.740 Suppressions used: 00:44:06.740 count bytes template 00:44:06.740 45 402 /usr/src/fio/parse.c 00:44:06.740 1 8 libtcmalloc_minimal.so 00:44:06.740 1 904 libcrypto.so 00:44:06.740 ----------------------------------------------------- 00:44:06.740 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 1 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.740 bdev_null0 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.740 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.999 [2024-11-19 08:07:58.684527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.999 bdev_null1 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 
53313233-1 --allow-any-host 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:06.999 08:07:58 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:06.999 { 00:44:06.999 "params": { 00:44:06.999 "name": "Nvme$subsystem", 00:44:06.999 "trtype": "$TEST_TRANSPORT", 00:44:06.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:06.999 "adrfam": "ipv4", 00:44:06.999 "trsvcid": "$NVMF_PORT", 00:44:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:06.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:06.999 "hdgst": ${hdgst:-false}, 00:44:06.999 "ddgst": ${ddgst:-false} 00:44:06.999 }, 00:44:06.999 "method": "bdev_nvme_attach_controller" 00:44:06.999 } 00:44:06.999 EOF 00:44:06.999 )") 00:44:06.999 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:07.000 
08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:07.000 { 00:44:07.000 "params": { 00:44:07.000 "name": "Nvme$subsystem", 00:44:07.000 "trtype": "$TEST_TRANSPORT", 00:44:07.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:07.000 "adrfam": "ipv4", 00:44:07.000 "trsvcid": "$NVMF_PORT", 00:44:07.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:07.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:07.000 "hdgst": ${hdgst:-false}, 00:44:07.000 "ddgst": ${ddgst:-false} 00:44:07.000 }, 00:44:07.000 "method": "bdev_nvme_attach_controller" 00:44:07.000 } 00:44:07.000 EOF 00:44:07.000 )") 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:07.000 "params": { 00:44:07.000 "name": "Nvme0", 00:44:07.000 "trtype": "tcp", 00:44:07.000 "traddr": "10.0.0.2", 00:44:07.000 "adrfam": "ipv4", 00:44:07.000 "trsvcid": "4420", 00:44:07.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:07.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:07.000 "hdgst": false, 00:44:07.000 "ddgst": false 00:44:07.000 }, 00:44:07.000 "method": "bdev_nvme_attach_controller" 00:44:07.000 },{ 00:44:07.000 "params": { 00:44:07.000 "name": "Nvme1", 00:44:07.000 "trtype": "tcp", 00:44:07.000 "traddr": "10.0.0.2", 00:44:07.000 "adrfam": "ipv4", 00:44:07.000 "trsvcid": "4420", 00:44:07.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:07.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:07.000 "hdgst": false, 00:44:07.000 "ddgst": false 00:44:07.000 }, 00:44:07.000 "method": "bdev_nvme_attach_controller" 00:44:07.000 }' 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:07.000 08:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:07.259 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:07.259 ... 
00:44:07.259 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:07.259 ... 00:44:07.259 fio-3.35 00:44:07.259 Starting 4 threads 00:44:13.819 00:44:13.819 filename0: (groupid=0, jobs=1): err= 0: pid=3210486: Tue Nov 19 08:08:05 2024 00:44:13.819 read: IOPS=1359, BW=10.6MiB/s (11.1MB/s)(53.2MiB/5003msec) 00:44:13.819 slat (nsec): min=6991, max=68728, avg=23553.70, stdev=8032.17 00:44:13.819 clat (usec): min=1373, max=18244, avg=5791.55, stdev=830.49 00:44:13.819 lat (usec): min=1399, max=18291, avg=5815.10, stdev=828.34 00:44:13.819 clat percentiles (usec): 00:44:13.819 | 1.00th=[ 3916], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5407], 00:44:13.819 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5604], 60.00th=[ 5669], 00:44:13.819 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6980], 95.00th=[ 7177], 00:44:13.819 | 99.00th=[ 8029], 99.50th=[ 9110], 99.90th=[16581], 99.95th=[16581], 00:44:13.819 | 99.99th=[18220] 00:44:13.819 bw ( KiB/s): min= 8862, max=11520, per=25.31%, avg=10871.80, stdev=928.92, samples=10 00:44:13.819 iops : min= 1107, max= 1440, avg=1358.90, stdev=116.30, samples=10 00:44:13.819 lat (msec) : 2=0.10%, 4=1.03%, 10=98.68%, 20=0.19% 00:44:13.819 cpu : usr=96.14%, sys=3.28%, ctx=9, majf=0, minf=1636 00:44:13.819 IO depths : 1=1.3%, 2=22.0%, 4=52.2%, 8=24.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:13.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.819 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.819 issued rwts: total=6804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:13.819 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:13.819 filename0: (groupid=0, jobs=1): err= 0: pid=3210487: Tue Nov 19 08:08:05 2024 00:44:13.819 read: IOPS=1313, BW=10.3MiB/s (10.8MB/s)(51.3MiB/5001msec) 00:44:13.819 slat (nsec): min=7029, max=73323, avg=22680.10, stdev=10007.03 00:44:13.819 clat (usec): min=925, max=18859, 
avg=6002.24, stdev=1285.90 00:44:13.819 lat (usec): min=948, max=18881, avg=6024.92, stdev=1283.72 00:44:13.819 clat percentiles (usec): 00:44:13.819 | 1.00th=[ 2008], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5473], 00:44:13.819 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5735], 00:44:13.819 | 70.00th=[ 5866], 80.00th=[ 6718], 90.00th=[ 7242], 95.00th=[ 8225], 00:44:13.819 | 99.00th=[11076], 99.50th=[12387], 99.90th=[13435], 99.95th=[13566], 00:44:13.819 | 99.99th=[18744] 00:44:13.819 bw ( KiB/s): min= 8432, max=11184, per=24.18%, avg=10387.56, stdev=1087.91, samples=9 00:44:13.819 iops : min= 1054, max= 1398, avg=1298.44, stdev=135.99, samples=9 00:44:13.819 lat (usec) : 1000=0.02% 00:44:13.819 lat (msec) : 2=0.93%, 4=1.28%, 10=95.89%, 20=1.89% 00:44:13.819 cpu : usr=95.76%, sys=3.70%, ctx=10, majf=0, minf=1638 00:44:13.819 IO depths : 1=1.2%, 2=17.2%, 4=56.1%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:13.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.819 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.819 issued rwts: total=6570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:13.819 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:13.819 filename1: (groupid=0, jobs=1): err= 0: pid=3210488: Tue Nov 19 08:08:05 2024 00:44:13.819 read: IOPS=1341, BW=10.5MiB/s (11.0MB/s)(52.4MiB/5001msec) 00:44:13.819 slat (nsec): min=6754, max=73283, avg=22835.02, stdev=9539.67 00:44:13.819 clat (usec): min=1044, max=14505, avg=5871.52, stdev=1049.57 00:44:13.819 lat (usec): min=1062, max=14536, avg=5894.36, stdev=1047.55 00:44:13.819 clat percentiles (usec): 00:44:13.820 | 1.00th=[ 2540], 5.00th=[ 5014], 10.00th=[ 5276], 20.00th=[ 5407], 00:44:13.820 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5735], 00:44:13.820 | 70.00th=[ 5800], 80.00th=[ 6063], 90.00th=[ 7046], 95.00th=[ 7439], 00:44:13.820 | 99.00th=[ 9765], 99.50th=[11076], 99.90th=[14484], 
99.95th=[14484], 00:44:13.820 | 99.99th=[14484] 00:44:13.820 bw ( KiB/s): min= 8752, max=11264, per=24.79%, avg=10647.11, stdev=1048.34, samples=9 00:44:13.820 iops : min= 1094, max= 1408, avg=1330.89, stdev=131.04, samples=9 00:44:13.820 lat (msec) : 2=0.27%, 4=1.49%, 10=97.38%, 20=0.86% 00:44:13.820 cpu : usr=96.00%, sys=3.48%, ctx=7, majf=0, minf=1634 00:44:13.820 IO depths : 1=1.8%, 2=21.0%, 4=53.4%, 8=23.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:13.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.820 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.820 issued rwts: total=6709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:13.820 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:13.820 filename1: (groupid=0, jobs=1): err= 0: pid=3210489: Tue Nov 19 08:08:05 2024 00:44:13.820 read: IOPS=1354, BW=10.6MiB/s (11.1MB/s)(52.9MiB/5002msec) 00:44:13.820 slat (nsec): min=7033, max=70888, avg=17310.66, stdev=7854.51 00:44:13.820 clat (usec): min=925, max=15436, avg=5846.20, stdev=825.15 00:44:13.820 lat (usec): min=951, max=15458, avg=5863.51, stdev=824.52 00:44:13.820 clat percentiles (usec): 00:44:13.820 | 1.00th=[ 4047], 5.00th=[ 4948], 10.00th=[ 5276], 20.00th=[ 5473], 00:44:13.820 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5735], 00:44:13.820 | 70.00th=[ 5800], 80.00th=[ 6063], 90.00th=[ 6915], 95.00th=[ 7242], 00:44:13.820 | 99.00th=[ 8586], 99.50th=[ 9372], 99.90th=[15139], 99.95th=[15139], 00:44:13.820 | 99.99th=[15401] 00:44:13.820 bw ( KiB/s): min= 8817, max=11424, per=25.22%, avg=10830.50, stdev=907.03, samples=10 00:44:13.820 iops : min= 1102, max= 1428, avg=1353.80, stdev=113.41, samples=10 00:44:13.820 lat (usec) : 1000=0.01% 00:44:13.820 lat (msec) : 4=0.94%, 10=98.88%, 20=0.16% 00:44:13.820 cpu : usr=95.58%, sys=3.86%, ctx=13, majf=0, minf=1638 00:44:13.820 IO depths : 1=1.0%, 2=13.8%, 4=60.0%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:13.820 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.820 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:13.820 issued rwts: total=6776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:13.820 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:13.820 00:44:13.820 Run status group 0 (all jobs): 00:44:13.820 READ: bw=41.9MiB/s (44.0MB/s), 10.3MiB/s-10.6MiB/s (10.8MB/s-11.1MB/s), io=210MiB (220MB), run=5001-5003msec 00:44:14.386 ----------------------------------------------------- 00:44:14.386 Suppressions used: 00:44:14.386 count bytes template 00:44:14.386 6 52 /usr/src/fio/parse.c 00:44:14.386 1 8 libtcmalloc_minimal.so 00:44:14.386 1 904 libcrypto.so 00:44:14.386 ----------------------------------------------------- 00:44:14.386 00:44:14.386 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:14.386 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:14.386 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:14.386 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:14.386 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:14.386 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:14.386 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.386 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.645 00:44:14.645 real 0m28.298s 00:44:14.645 user 4m37.033s 00:44:14.645 sys 0m6.772s 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:14.645 08:08:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:14.645 ************************************ 00:44:14.645 END TEST fio_dif_rand_params 00:44:14.645 ************************************ 00:44:14.645 08:08:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:14.645 08:08:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:14.645 08:08:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:14.645 08:08:06 nvmf_dif -- common/autotest_common.sh@10 -- # 
set +x 00:44:14.645 ************************************ 00:44:14.645 START TEST fio_dif_digest 00:44:14.645 ************************************ 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:14.645 bdev_null0 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:14.645 [2024-11-19 08:08:06.421279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:14.645 08:08:06 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:44:14.645 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:14.646 { 00:44:14.646 "params": { 00:44:14.646 "name": "Nvme$subsystem", 00:44:14.646 "trtype": "$TEST_TRANSPORT", 00:44:14.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:14.646 "adrfam": "ipv4", 00:44:14.646 "trsvcid": "$NVMF_PORT", 00:44:14.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:14.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:14.646 "hdgst": ${hdgst:-false}, 00:44:14.646 "ddgst": ${ddgst:-false} 00:44:14.646 }, 00:44:14.646 "method": "bdev_nvme_attach_controller" 00:44:14.646 } 00:44:14.646 EOF 00:44:14.646 )") 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:14.646 "params": { 00:44:14.646 "name": "Nvme0", 00:44:14.646 "trtype": "tcp", 00:44:14.646 "traddr": "10.0.0.2", 00:44:14.646 "adrfam": "ipv4", 00:44:14.646 "trsvcid": "4420", 00:44:14.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:14.646 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:14.646 "hdgst": true, 00:44:14.646 "ddgst": true 00:44:14.646 }, 00:44:14.646 "method": "bdev_nvme_attach_controller" 00:44:14.646 }' 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:14.646 08:08:06 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:14.904 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:14.904 ... 00:44:14.904 fio-3.35 00:44:14.904 Starting 3 threads 00:44:27.103 00:44:27.103 filename0: (groupid=0, jobs=1): err= 0: pid=3211478: Tue Nov 19 08:08:17 2024 00:44:27.103 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(221MiB/10050msec) 00:44:27.103 slat (nsec): min=7442, max=86172, avg=26135.40, stdev=9610.68 00:44:27.103 clat (usec): min=10084, max=54480, avg=17037.47, stdev=1779.19 00:44:27.103 lat (usec): min=10112, max=54506, avg=17063.61, stdev=1778.94 00:44:27.103 clat percentiles (usec): 00:44:27.103 | 1.00th=[13960], 5.00th=[14877], 10.00th=[15401], 20.00th=[15926], 00:44:27.103 | 30.00th=[16319], 40.00th=[16712], 50.00th=[16909], 60.00th=[17433], 00:44:27.103 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18744], 95.00th=[19006], 00:44:27.103 | 99.00th=[20055], 99.50th=[21103], 99.90th=[51643], 99.95th=[54264], 00:44:27.103 | 99.99th=[54264] 00:44:27.103 bw ( KiB/s): min=21504, max=23296, per=36.52%, avg=22540.80, stdev=480.55, samples=20 00:44:27.103 iops : min= 168, max= 182, avg=176.10, stdev= 3.75, samples=20 00:44:27.103 lat (msec) : 20=98.92%, 50=0.96%, 100=0.11% 00:44:27.103 cpu : usr=92.23%, sys=6.38%, ctx=195, majf=0, minf=1634 00:44:27.103 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:27.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:27.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:27.103 issued rwts: total=1764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:27.103 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:27.103 filename0: (groupid=0, jobs=1): err= 0: pid=3211480: Tue Nov 19 08:08:17 2024 00:44:27.103 read: IOPS=152, BW=19.1MiB/s (20.1MB/s)(192MiB/10047msec) 00:44:27.103 
slat (nsec): min=7108, max=54055, avg=21487.68, stdev=5190.53 00:44:27.103 clat (usec): min=15541, max=64137, avg=19557.10, stdev=2744.15 00:44:27.103 lat (usec): min=15559, max=64159, avg=19578.59, stdev=2744.04 00:44:27.103 clat percentiles (usec): 00:44:27.103 | 1.00th=[16450], 5.00th=[17171], 10.00th=[17695], 20.00th=[18220], 00:44:27.103 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19792], 00:44:27.103 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21365], 95.00th=[21890], 00:44:27.103 | 99.00th=[23200], 99.50th=[24249], 99.90th=[64226], 99.95th=[64226], 00:44:27.103 | 99.99th=[64226] 00:44:27.103 bw ( KiB/s): min=18432, max=20736, per=31.83%, avg=19646.10, stdev=666.75, samples=20 00:44:27.103 iops : min= 144, max= 162, avg=153.45, stdev= 5.27, samples=20 00:44:27.103 lat (msec) : 20=67.73%, 50=31.95%, 100=0.33% 00:44:27.103 cpu : usr=93.82%, sys=5.61%, ctx=14, majf=0, minf=1634 00:44:27.103 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:27.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:27.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:27.103 issued rwts: total=1537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:27.103 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:27.103 filename0: (groupid=0, jobs=1): err= 0: pid=3211481: Tue Nov 19 08:08:17 2024 00:44:27.103 read: IOPS=153, BW=19.2MiB/s (20.2MB/s)(193MiB/10048msec) 00:44:27.103 slat (nsec): min=7507, max=53883, avg=21826.15, stdev=5375.06 00:44:27.103 clat (usec): min=12376, max=54830, avg=19457.73, stdev=1972.26 00:44:27.103 lat (usec): min=12395, max=54851, avg=19479.55, stdev=1972.12 00:44:27.103 clat percentiles (usec): 00:44:27.103 | 1.00th=[16188], 5.00th=[17171], 10.00th=[17695], 20.00th=[18220], 00:44:27.103 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19268], 60.00th=[19792], 00:44:27.103 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21365], 95.00th=[22152], 
00:44:27.103 | 99.00th=[23462], 99.50th=[24249], 99.90th=[50070], 99.95th=[54789], 00:44:27.103 | 99.99th=[54789] 00:44:27.103 bw ( KiB/s): min=18176, max=21248, per=31.98%, avg=19739.60, stdev=792.54, samples=20 00:44:27.103 iops : min= 142, max= 166, avg=154.20, stdev= 6.19, samples=20 00:44:27.103 lat (msec) : 20=65.89%, 50=33.98%, 100=0.13% 00:44:27.103 cpu : usr=93.70%, sys=5.73%, ctx=15, majf=0, minf=1632 00:44:27.103 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:27.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:27.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:27.103 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:27.103 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:27.103 00:44:27.103 Run status group 0 (all jobs): 00:44:27.103 READ: bw=60.3MiB/s (63.2MB/s), 19.1MiB/s-21.9MiB/s (20.1MB/s-23.0MB/s), io=606MiB (635MB), run=10047-10050msec 00:44:27.103 ----------------------------------------------------- 00:44:27.103 Suppressions used: 00:44:27.103 count bytes template 00:44:27.103 5 44 /usr/src/fio/parse.c 00:44:27.103 1 8 libtcmalloc_minimal.so 00:44:27.103 1 904 libcrypto.so 00:44:27.103 ----------------------------------------------------- 00:44:27.103 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.103 
08:08:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.103 00:44:27.103 real 0m12.556s 00:44:27.103 user 0m30.518s 00:44:27.103 sys 0m2.285s 00:44:27.103 08:08:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:27.104 08:08:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:27.104 ************************************ 00:44:27.104 END TEST fio_dif_digest 00:44:27.104 ************************************ 00:44:27.104 08:08:18 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:27.104 08:08:18 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:27.104 08:08:18 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:27.104 08:08:18 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:27.104 08:08:18 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:27.104 08:08:18 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:27.104 08:08:18 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:27.104 08:08:18 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:27.104 rmmod nvme_tcp 00:44:27.104 rmmod nvme_fabrics 00:44:27.104 rmmod nvme_keyring 00:44:27.104 08:08:19 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:27.104 08:08:19 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:27.104 08:08:19 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:27.104 08:08:19 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3204599 ']' 00:44:27.104 08:08:19 nvmf_dif -- 
nvmf/common.sh@518 -- # killprocess 3204599 00:44:27.104 08:08:19 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3204599 ']' 00:44:27.104 08:08:19 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3204599 00:44:27.104 08:08:19 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:44:27.362 08:08:19 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:27.362 08:08:19 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3204599 00:44:27.362 08:08:19 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:27.362 08:08:19 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:27.362 08:08:19 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3204599' 00:44:27.362 killing process with pid 3204599 00:44:27.362 08:08:19 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3204599 00:44:27.362 08:08:19 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3204599 00:44:28.738 08:08:20 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:28.738 08:08:20 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:29.674 Waiting for block devices as requested 00:44:29.674 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:29.674 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:29.674 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:29.933 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:29.933 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:29.933 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:29.933 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:30.192 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:30.192 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:30.192 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:30.192 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:30.452 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:30.452 0000:80:04.4 (8086 0e24): 
vfio-pci -> ioatdma 00:44:30.452 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:30.452 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:30.712 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:30.712 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:30.712 08:08:22 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:30.712 08:08:22 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:30.712 08:08:22 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:30.712 08:08:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:44:30.712 08:08:22 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:30.712 08:08:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:44:30.712 08:08:22 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:30.712 08:08:22 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:30.712 08:08:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:30.712 08:08:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:30.712 08:08:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:33.246 08:08:24 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:33.246 00:44:33.246 real 1m16.436s 00:44:33.246 user 6m46.916s 00:44:33.246 sys 0m18.206s 00:44:33.246 08:08:24 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:33.246 08:08:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:33.246 ************************************ 00:44:33.246 END TEST nvmf_dif 00:44:33.246 ************************************ 00:44:33.246 08:08:24 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:33.246 08:08:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:33.246 08:08:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:33.246 08:08:24 -- 
common/autotest_common.sh@10 -- # set +x 00:44:33.246 ************************************ 00:44:33.246 START TEST nvmf_abort_qd_sizes 00:44:33.246 ************************************ 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:33.246 * Looking for test storage... 00:44:33.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 
00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:33.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:33.246 --rc genhtml_branch_coverage=1 00:44:33.246 --rc genhtml_function_coverage=1 00:44:33.246 --rc genhtml_legend=1 00:44:33.246 --rc geninfo_all_blocks=1 00:44:33.246 --rc geninfo_unexecuted_blocks=1 00:44:33.246 00:44:33.246 ' 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:33.246 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:33.246 --rc genhtml_branch_coverage=1 00:44:33.246 --rc genhtml_function_coverage=1 00:44:33.246 --rc genhtml_legend=1 00:44:33.246 --rc geninfo_all_blocks=1 00:44:33.246 --rc geninfo_unexecuted_blocks=1 00:44:33.246 00:44:33.246 ' 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:33.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:33.246 --rc genhtml_branch_coverage=1 00:44:33.246 --rc genhtml_function_coverage=1 00:44:33.246 --rc genhtml_legend=1 00:44:33.246 --rc geninfo_all_blocks=1 00:44:33.246 --rc geninfo_unexecuted_blocks=1 00:44:33.246 00:44:33.246 ' 00:44:33.246 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:33.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:33.246 --rc genhtml_branch_coverage=1 00:44:33.246 --rc genhtml_function_coverage=1 00:44:33.246 --rc genhtml_legend=1 00:44:33.246 --rc geninfo_all_blocks=1 00:44:33.246 --rc geninfo_unexecuted_blocks=1 00:44:33.246 00:44:33.246 ' 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:33.247 08:08:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:33.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:44:33.247 08:08:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:35.150 08:08:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:44:35.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:44:35.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:44:35.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:35.150 08:08:26 nvmf_abort_qd_sizes 
-- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:44:35.150 Found net devices under 0000:0a:00.1: cvl_0_1 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:35.150 08:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:35.150 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:35.150 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:35.150 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:35.150 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:35.150 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:35.150 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:35.150 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:35.409 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:35.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:35.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:44:35.409 00:44:35.409 --- 10.0.0.2 ping statistics --- 00:44:35.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:35.409 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:44:35.409 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:35.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:35.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:44:35.409 00:44:35.409 --- 10.0.0.1 ping statistics --- 00:44:35.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:35.409 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:44:35.409 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:35.409 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:44:35.409 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:35.409 08:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:36.353 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:36.353 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:36.353 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:36.353 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:36.353 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:36.353 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:36.353 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:36.611 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:36.611 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:44:36.611 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:44:36.611 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:44:36.611 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:44:36.611 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:44:36.611 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:44:36.611 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:44:36.611 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:37.547 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:37.547 08:08:29 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3216530 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3216530 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3216530 ']' 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:37.547 08:08:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:37.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:37.548 08:08:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:37.548 08:08:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:37.805 [2024-11-19 08:08:29.524926] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:44:37.805 [2024-11-19 08:08:29.525098] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:37.805 [2024-11-19 08:08:29.678647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:38.064 [2024-11-19 08:08:29.822713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:38.064 [2024-11-19 08:08:29.822809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:38.064 [2024-11-19 08:08:29.822836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:38.065 [2024-11-19 08:08:29.822861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:38.065 [2024-11-19 08:08:29.822881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:38.065 [2024-11-19 08:08:29.825764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:38.065 [2024-11-19 08:08:29.825797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:38.065 [2024-11-19 08:08:29.825876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:38.065 [2024-11-19 08:08:29.825883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:38.632 08:08:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:38.632 ************************************ 00:44:38.632 START TEST spdk_target_abort 00:44:38.632 ************************************ 00:44:38.632 08:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:44:38.632 08:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:38.632 08:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:44:38.632 08:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.632 08:08:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:41.918 spdk_targetn1 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:41.918 [2024-11-19 08:08:33.443427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:41.918 [2024-11-19 08:08:33.489846] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:41.918 08:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:45.207 Initializing NVMe Controllers 00:44:45.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:45.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:45.207 Initialization complete. Launching workers. 
00:44:45.207 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8539, failed: 0 00:44:45.207 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1204, failed to submit 7335 00:44:45.207 success 716, unsuccessful 488, failed 0 00:44:45.207 08:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:45.207 08:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:48.493 Initializing NVMe Controllers 00:44:48.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:48.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:48.493 Initialization complete. Launching workers. 00:44:48.493 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8549, failed: 0 00:44:48.493 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1250, failed to submit 7299 00:44:48.493 success 302, unsuccessful 948, failed 0 00:44:48.493 08:08:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:48.493 08:08:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:52.680 Initializing NVMe Controllers 00:44:52.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:52.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:52.680 Initialization complete. Launching workers. 
00:44:52.680 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27386, failed: 0 00:44:52.680 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2663, failed to submit 24723 00:44:52.680 success 225, unsuccessful 2438, failed 0 00:44:52.680 08:08:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:52.680 08:08:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:52.680 08:08:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:52.680 08:08:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:52.680 08:08:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:52.681 08:08:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:52.681 08:08:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3216530 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3216530 ']' 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3216530 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3216530 00:44:53.248 08:08:45 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3216530' 00:44:53.248 killing process with pid 3216530 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3216530 00:44:53.248 08:08:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3216530 00:44:54.183 00:44:54.183 real 0m15.490s 00:44:54.183 user 1m0.255s 00:44:54.183 sys 0m3.012s 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:54.183 ************************************ 00:44:54.183 END TEST spdk_target_abort 00:44:54.183 ************************************ 00:44:54.183 08:08:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:54.183 08:08:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:54.183 08:08:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:54.183 08:08:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:54.183 ************************************ 00:44:54.183 START TEST kernel_target_abort 00:44:54.183 ************************************ 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:54.183 08:08:46 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:44:54.183 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:54.441 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:54.441 08:08:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:55.377 Waiting for block devices as requested 00:44:55.377 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:55.635 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:55.635 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:55.894 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:55.894 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:55.894 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:55.894 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:56.153 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:56.153 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:56.153 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:56.153 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:56.153 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:56.411 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:56.411 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:56.411 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:56.411 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:56.669 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:56.928 08:08:48 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:56.928 No valid GPT data, bailing 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:56.928 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:44:57.187 00:44:57.187 Discovery Log Number of Records 2, Generation counter 2 00:44:57.187 =====Discovery Log Entry 0====== 00:44:57.187 trtype: tcp 00:44:57.187 adrfam: ipv4 00:44:57.187 subtype: current discovery subsystem 00:44:57.187 treq: not specified, sq flow control disable supported 00:44:57.187 portid: 1 00:44:57.187 trsvcid: 4420 00:44:57.187 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:57.187 traddr: 10.0.0.1 00:44:57.187 eflags: none 00:44:57.187 sectype: none 00:44:57.187 =====Discovery Log Entry 1====== 00:44:57.187 trtype: tcp 00:44:57.187 adrfam: ipv4 00:44:57.187 subtype: nvme subsystem 00:44:57.187 treq: not specified, sq flow control disable supported 00:44:57.187 portid: 1 00:44:57.187 trsvcid: 4420 00:44:57.187 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:57.187 traddr: 10.0.0.1 00:44:57.187 eflags: none 00:44:57.187 sectype: none 00:44:57.187 08:08:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:57.187 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:57.187 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:57.187 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:57.187 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:57.187 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:57.188 08:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:00.473 Initializing NVMe Controllers 00:45:00.473 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:00.473 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:00.473 Initialization complete. Launching workers. 
00:45:00.473 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39119, failed: 0 00:45:00.473 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39119, failed to submit 0 00:45:00.473 success 0, unsuccessful 39119, failed 0 00:45:00.473 08:08:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:00.473 08:08:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:03.761 Initializing NVMe Controllers 00:45:03.761 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:03.761 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:03.761 Initialization complete. Launching workers. 00:45:03.761 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66777, failed: 0 00:45:03.761 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16858, failed to submit 49919 00:45:03.761 success 0, unsuccessful 16858, failed 0 00:45:03.761 08:08:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:03.761 08:08:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:07.053 Initializing NVMe Controllers 00:45:07.053 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:07.053 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:07.053 Initialization complete. Launching workers. 
00:45:07.053 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64197, failed: 0 00:45:07.053 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16034, failed to submit 48163 00:45:07.053 success 0, unsuccessful 16034, failed 0 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:45:07.053 08:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:07.994 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:07.994 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:07.994 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:07.994 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:07.994 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:07.994 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:07.994 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:07.994 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:07.994 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:45:07.994 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:45:07.994 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:45:07.994 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:45:07.994 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:45:07.994 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:45:07.994 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:45:07.994 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:45:08.931 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:45:09.191 00:45:09.191 real 0m14.834s 00:45:09.191 user 0m7.296s 00:45:09.191 sys 0m3.417s 00:45:09.191 08:09:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:09.191 08:09:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:09.191 ************************************ 00:45:09.191 END TEST kernel_target_abort 00:45:09.191 ************************************ 00:45:09.191 08:09:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:09.191 08:09:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:09.191 08:09:00 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:09.191 08:09:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:09.191 08:09:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:09.191 08:09:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:09.191 08:09:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:09.191 08:09:00 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:09.191 rmmod nvme_tcp 00:45:09.191 rmmod nvme_fabrics 00:45:09.191 rmmod nvme_keyring 00:45:09.191 08:09:00 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:45:09.191 08:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:09.191 08:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:09.191 08:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3216530 ']' 00:45:09.191 08:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3216530 00:45:09.191 08:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3216530 ']' 00:45:09.191 08:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3216530 00:45:09.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3216530) - No such process 00:45:09.191 08:09:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3216530 is not found' 00:45:09.191 Process with pid 3216530 is not found 00:45:09.191 08:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:09.191 08:09:01 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:10.246 Waiting for block devices as requested 00:45:10.575 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:45:10.575 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:10.575 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:10.833 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:10.833 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:10.833 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:10.833 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:10.833 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:11.091 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:11.091 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:45:11.091 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:45:11.091 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:45:11.091 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:45:11.349 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:45:11.349 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:45:11.349 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:45:11.349 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:11.609 08:09:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:13.517 08:09:05 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:13.517 00:45:13.517 real 0m40.704s 00:45:13.517 user 1m10.077s 00:45:13.517 sys 0m9.975s 00:45:13.517 08:09:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:13.517 08:09:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:13.517 ************************************ 00:45:13.517 END TEST nvmf_abort_qd_sizes 00:45:13.517 ************************************ 00:45:13.517 08:09:05 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:13.517 08:09:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:13.517 08:09:05 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:45:13.517 08:09:05 -- common/autotest_common.sh@10 -- # set +x 00:45:13.775 ************************************ 00:45:13.775 START TEST keyring_file 00:45:13.775 ************************************ 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:13.775 * Looking for test storage... 00:45:13.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:13.775 08:09:05 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:13.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.775 --rc genhtml_branch_coverage=1 00:45:13.775 --rc genhtml_function_coverage=1 00:45:13.775 --rc genhtml_legend=1 00:45:13.775 --rc geninfo_all_blocks=1 00:45:13.775 --rc geninfo_unexecuted_blocks=1 00:45:13.775 00:45:13.775 ' 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:13.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.775 --rc genhtml_branch_coverage=1 00:45:13.775 --rc genhtml_function_coverage=1 00:45:13.775 --rc genhtml_legend=1 00:45:13.775 --rc geninfo_all_blocks=1 00:45:13.775 --rc 
geninfo_unexecuted_blocks=1 00:45:13.775 00:45:13.775 ' 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:13.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.775 --rc genhtml_branch_coverage=1 00:45:13.775 --rc genhtml_function_coverage=1 00:45:13.775 --rc genhtml_legend=1 00:45:13.775 --rc geninfo_all_blocks=1 00:45:13.775 --rc geninfo_unexecuted_blocks=1 00:45:13.775 00:45:13.775 ' 00:45:13.775 08:09:05 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:13.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.775 --rc genhtml_branch_coverage=1 00:45:13.775 --rc genhtml_function_coverage=1 00:45:13.775 --rc genhtml_legend=1 00:45:13.775 --rc geninfo_all_blocks=1 00:45:13.775 --rc geninfo_unexecuted_blocks=1 00:45:13.775 00:45:13.775 ' 00:45:13.775 08:09:05 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:13.775 08:09:05 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:13.775 08:09:05 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:13.775 08:09:05 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:13.775 08:09:05 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.775 08:09:05 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.775 08:09:05 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.775 08:09:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:13.775 08:09:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:13.775 08:09:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:13.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NqBE2BJNaP 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NqBE2BJNaP 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NqBE2BJNaP 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NqBE2BJNaP 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Wv5N5tNQ4F 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:13.776 08:09:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Wv5N5tNQ4F 00:45:13.776 08:09:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Wv5N5tNQ4F 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Wv5N5tNQ4F 
00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@30 -- # tgtpid=3222888 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:13.776 08:09:05 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3222888 00:45:13.776 08:09:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3222888 ']' 00:45:13.776 08:09:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:13.776 08:09:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:13.776 08:09:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:13.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:13.776 08:09:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:13.776 08:09:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:14.034 [2024-11-19 08:09:05.804096] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:45:14.034 [2024-11-19 08:09:05.804239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3222888 ] 00:45:14.034 [2024-11-19 08:09:05.945186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:14.292 [2024-11-19 08:09:06.082281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:15.228 08:09:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:15.228 08:09:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:15.228 08:09:06 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:15.228 08:09:06 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.228 08:09:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:15.228 [2024-11-19 08:09:07.005794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:15.228 null0 00:45:15.228 [2024-11-19 08:09:07.037808] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:15.228 [2024-11-19 08:09:07.038418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:15.228 08:09:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:15.228 08:09:07 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:15.228 08:09:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:15.228 08:09:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:15.228 08:09:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:45:15.228 08:09:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:45:15.228 08:09:07 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:45:15.228 08:09:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:15.229 [2024-11-19 08:09:07.065844] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:15.229 request: 00:45:15.229 { 00:45:15.229 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:15.229 "secure_channel": false, 00:45:15.229 "listen_address": { 00:45:15.229 "trtype": "tcp", 00:45:15.229 "traddr": "127.0.0.1", 00:45:15.229 "trsvcid": "4420" 00:45:15.229 }, 00:45:15.229 "method": "nvmf_subsystem_add_listener", 00:45:15.229 "req_id": 1 00:45:15.229 } 00:45:15.229 Got JSON-RPC error response 00:45:15.229 response: 00:45:15.229 { 00:45:15.229 "code": -32602, 00:45:15.229 "message": "Invalid parameters" 00:45:15.229 } 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:15.229 08:09:07 keyring_file -- keyring/file.sh@47 -- # bperfpid=3223111 00:45:15.229 08:09:07 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:15.229 08:09:07 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3223111 /var/tmp/bperf.sock 00:45:15.229 08:09:07 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3223111 ']' 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:15.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:15.229 08:09:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:15.229 [2024-11-19 08:09:07.154757] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:45:15.229 [2024-11-19 08:09:07.154885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223111 ] 00:45:15.487 [2024-11-19 08:09:07.297123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:15.745 [2024-11-19 08:09:07.435142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:16.311 08:09:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:16.311 08:09:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:16.311 08:09:08 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NqBE2BJNaP 00:45:16.311 08:09:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NqBE2BJNaP 00:45:16.570 08:09:08 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Wv5N5tNQ4F 00:45:16.570 08:09:08 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Wv5N5tNQ4F 00:45:16.827 08:09:08 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:16.827 08:09:08 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:16.827 08:09:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:16.828 08:09:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:16.828 08:09:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:17.085 08:09:08 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.NqBE2BJNaP == \/\t\m\p\/\t\m\p\.\N\q\B\E\2\B\J\N\a\P ]] 00:45:17.085 08:09:08 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:17.085 08:09:08 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:17.085 08:09:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:17.085 08:09:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:17.085 08:09:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:17.343 08:09:09 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Wv5N5tNQ4F == \/\t\m\p\/\t\m\p\.\W\v\5\N\5\t\N\Q\4\F ]] 00:45:17.343 08:09:09 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:17.343 08:09:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:17.343 08:09:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:17.343 08:09:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:17.343 08:09:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:17.343 08:09:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:45:17.602 08:09:09 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:17.602 08:09:09 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:17.862 08:09:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:17.862 08:09:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:17.862 08:09:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:17.862 08:09:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:17.862 08:09:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.122 08:09:09 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:18.122 08:09:09 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:18.122 08:09:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:18.380 [2024-11-19 08:09:10.090082] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:18.380 nvme0n1 00:45:18.380 08:09:10 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:18.380 08:09:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:18.380 08:09:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:18.380 08:09:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:18.380 08:09:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.380 08:09:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:45:18.639 08:09:10 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:18.639 08:09:10 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:18.639 08:09:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:18.639 08:09:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:18.639 08:09:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:18.639 08:09:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:18.639 08:09:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:18.896 08:09:10 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:18.896 08:09:10 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:19.154 Running I/O for 1 seconds... 00:45:20.089 6479.00 IOPS, 25.31 MiB/s 00:45:20.089 Latency(us) 00:45:20.089 [2024-11-19T07:09:12.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:20.089 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:20.089 nvme0n1 : 1.01 6527.01 25.50 0.00 0.00 19512.84 11553.75 33593.27 00:45:20.089 [2024-11-19T07:09:12.019Z] =================================================================================================================== 00:45:20.089 [2024-11-19T07:09:12.019Z] Total : 6527.01 25.50 0.00 0.00 19512.84 11553.75 33593.27 00:45:20.089 { 00:45:20.089 "results": [ 00:45:20.089 { 00:45:20.089 "job": "nvme0n1", 00:45:20.089 "core_mask": "0x2", 00:45:20.089 "workload": "randrw", 00:45:20.089 "percentage": 50, 00:45:20.089 "status": "finished", 00:45:20.089 "queue_depth": 128, 00:45:20.089 "io_size": 4096, 00:45:20.089 "runtime": 1.012256, 00:45:20.089 "iops": 6527.005026396485, 00:45:20.089 "mibps": 25.49611338436127, 00:45:20.089 
"io_failed": 0, 00:45:20.089 "io_timeout": 0, 00:45:20.089 "avg_latency_us": 19512.83894858988, 00:45:20.089 "min_latency_us": 11553.754074074073, 00:45:20.089 "max_latency_us": 33593.26814814815 00:45:20.089 } 00:45:20.089 ], 00:45:20.089 "core_count": 1 00:45:20.089 } 00:45:20.089 08:09:11 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:20.089 08:09:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:20.347 08:09:12 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:20.347 08:09:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:20.347 08:09:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:20.347 08:09:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:20.347 08:09:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:20.347 08:09:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:20.606 08:09:12 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:20.606 08:09:12 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:20.606 08:09:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:20.606 08:09:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:20.606 08:09:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:20.606 08:09:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:20.606 08:09:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:20.864 08:09:12 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:20.864 08:09:12 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:20.864 08:09:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:20.864 08:09:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:20.864 08:09:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:20.864 08:09:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:20.864 08:09:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:20.864 08:09:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:20.864 08:09:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:20.864 08:09:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:21.123 [2024-11-19 08:09:12.994609] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:21.123 [2024-11-19 08:09:12.994705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:21.123 [2024-11-19 08:09:12.995659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:21.123 [2024-11-19 08:09:12.996655] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:21.123 [2024-11-19 08:09:12.996708] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:21.123 [2024-11-19 08:09:12.996747] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:21.123 [2024-11-19 08:09:12.996768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:45:21.123 request: 00:45:21.123 { 00:45:21.123 "name": "nvme0", 00:45:21.123 "trtype": "tcp", 00:45:21.123 "traddr": "127.0.0.1", 00:45:21.123 "adrfam": "ipv4", 00:45:21.123 "trsvcid": "4420", 00:45:21.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:21.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:21.123 "prchk_reftag": false, 00:45:21.123 "prchk_guard": false, 00:45:21.123 "hdgst": false, 00:45:21.123 "ddgst": false, 00:45:21.123 "psk": "key1", 00:45:21.123 "allow_unrecognized_csi": false, 00:45:21.123 "method": "bdev_nvme_attach_controller", 00:45:21.123 "req_id": 1 00:45:21.123 } 00:45:21.123 Got JSON-RPC error response 00:45:21.123 response: 00:45:21.123 { 00:45:21.123 "code": -5, 00:45:21.123 "message": "Input/output error" 00:45:21.123 } 00:45:21.123 08:09:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:21.123 08:09:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:21.123 08:09:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:21.123 08:09:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:21.123 08:09:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:21.123 08:09:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:21.123 08:09:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:21.123 08:09:13 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:45:21.123 08:09:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:21.123 08:09:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:21.381 08:09:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:45:21.381 08:09:13 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:45:21.381 08:09:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:21.381 08:09:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:21.381 08:09:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:21.381 08:09:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:21.381 08:09:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:21.946 08:09:13 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:45:21.946 08:09:13 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:45:21.946 08:09:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:21.946 08:09:13 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:45:21.946 08:09:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:45:22.202 08:09:14 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:45:22.202 08:09:14 keyring_file -- keyring/file.sh@78 -- # jq length 00:45:22.202 08:09:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:22.459 08:09:14 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:45:22.459 08:09:14 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.NqBE2BJNaP 00:45:22.716 08:09:14 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NqBE2BJNaP 00:45:22.716 08:09:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:22.716 08:09:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NqBE2BJNaP 00:45:22.716 08:09:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:22.716 08:09:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:22.716 08:09:14 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:22.716 08:09:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:22.716 08:09:14 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NqBE2BJNaP 00:45:22.716 08:09:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NqBE2BJNaP 00:45:22.974 [2024-11-19 08:09:14.653959] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NqBE2BJNaP': 0100660 00:45:22.974 [2024-11-19 08:09:14.654027] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:45:22.974 request: 00:45:22.974 { 00:45:22.974 "name": "key0", 00:45:22.974 "path": "/tmp/tmp.NqBE2BJNaP", 00:45:22.974 "method": "keyring_file_add_key", 00:45:22.974 "req_id": 1 00:45:22.974 } 00:45:22.974 Got JSON-RPC error response 00:45:22.974 response: 00:45:22.974 { 00:45:22.974 "code": -1, 00:45:22.974 "message": "Operation not permitted" 00:45:22.974 } 00:45:22.974 08:09:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:22.974 08:09:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:22.974 08:09:14 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:22.974 08:09:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:22.974 08:09:14 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.NqBE2BJNaP 00:45:22.974 08:09:14 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NqBE2BJNaP 00:45:22.974 08:09:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NqBE2BJNaP 00:45:23.231 08:09:14 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.NqBE2BJNaP 00:45:23.232 08:09:14 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:45:23.232 08:09:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:23.232 08:09:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:23.232 08:09:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:23.232 08:09:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:23.232 08:09:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:23.489 08:09:15 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:45:23.489 08:09:15 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:23.489 08:09:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:45:23.489 08:09:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:23.489 08:09:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:23.489 08:09:15 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:23.489 08:09:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:23.490 08:09:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:23.490 08:09:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:23.490 08:09:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:23.748 [2024-11-19 08:09:15.476313] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NqBE2BJNaP': No such file or directory 00:45:23.748 [2024-11-19 08:09:15.476366] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:45:23.748 [2024-11-19 08:09:15.476414] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:45:23.748 [2024-11-19 08:09:15.476439] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:45:23.748 [2024-11-19 08:09:15.476461] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:45:23.748 [2024-11-19 08:09:15.476484] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:45:23.748 request: 00:45:23.748 { 00:45:23.748 "name": "nvme0", 00:45:23.748 "trtype": "tcp", 00:45:23.748 "traddr": "127.0.0.1", 00:45:23.748 "adrfam": "ipv4", 00:45:23.748 "trsvcid": "4420", 00:45:23.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:23.748 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:45:23.748 "prchk_reftag": false, 00:45:23.748 "prchk_guard": false, 00:45:23.748 "hdgst": false, 00:45:23.748 "ddgst": false, 00:45:23.748 "psk": "key0", 00:45:23.748 "allow_unrecognized_csi": false, 00:45:23.748 "method": "bdev_nvme_attach_controller", 00:45:23.748 "req_id": 1 00:45:23.748 } 00:45:23.748 Got JSON-RPC error response 00:45:23.748 response: 00:45:23.748 { 00:45:23.748 "code": -19, 00:45:23.748 "message": "No such device" 00:45:23.748 } 00:45:23.748 08:09:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:45:23.748 08:09:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:23.748 08:09:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:23.748 08:09:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:23.748 08:09:15 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:45:23.748 08:09:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:24.006 08:09:15 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.f1mOcyjF2Z 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:24.006 08:09:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:24.006 08:09:15 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:45:24.006 08:09:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:24.006 08:09:15 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:45:24.006 08:09:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:45:24.006 08:09:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.f1mOcyjF2Z 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.f1mOcyjF2Z 00:45:24.006 08:09:15 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.f1mOcyjF2Z 00:45:24.006 08:09:15 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.f1mOcyjF2Z 00:45:24.006 08:09:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.f1mOcyjF2Z 00:45:24.264 08:09:16 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:24.264 08:09:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:24.523 nvme0n1 00:45:24.523 08:09:16 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:45:24.523 08:09:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:24.523 08:09:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:24.523 08:09:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:24.523 08:09:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:24.523 
08:09:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:25.090 08:09:16 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:45:25.090 08:09:16 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:45:25.090 08:09:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:45:25.348 08:09:17 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:45:25.348 08:09:17 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:45:25.348 08:09:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.348 08:09:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.348 08:09:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:25.606 08:09:17 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:45:25.606 08:09:17 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:45:25.606 08:09:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:25.606 08:09:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:25.606 08:09:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:25.606 08:09:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:25.606 08:09:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:25.864 08:09:17 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:45:25.864 08:09:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:25.864 08:09:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:45:26.123 08:09:17 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:45:26.124 08:09:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:26.124 08:09:17 keyring_file -- keyring/file.sh@105 -- # jq length 00:45:26.382 08:09:18 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:45:26.382 08:09:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.f1mOcyjF2Z 00:45:26.382 08:09:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.f1mOcyjF2Z 00:45:26.639 08:09:18 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Wv5N5tNQ4F 00:45:26.639 08:09:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Wv5N5tNQ4F 00:45:26.897 08:09:18 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:26.897 08:09:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:27.155 nvme0n1 00:45:27.155 08:09:19 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:45:27.155 08:09:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:45:27.721 08:09:19 keyring_file -- keyring/file.sh@113 -- # config='{ 00:45:27.721 "subsystems": [ 00:45:27.721 { 00:45:27.721 "subsystem": "keyring", 00:45:27.721 
"config": [ 00:45:27.721 { 00:45:27.721 "method": "keyring_file_add_key", 00:45:27.721 "params": { 00:45:27.721 "name": "key0", 00:45:27.721 "path": "/tmp/tmp.f1mOcyjF2Z" 00:45:27.721 } 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "method": "keyring_file_add_key", 00:45:27.721 "params": { 00:45:27.721 "name": "key1", 00:45:27.721 "path": "/tmp/tmp.Wv5N5tNQ4F" 00:45:27.721 } 00:45:27.721 } 00:45:27.721 ] 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "subsystem": "iobuf", 00:45:27.721 "config": [ 00:45:27.721 { 00:45:27.721 "method": "iobuf_set_options", 00:45:27.721 "params": { 00:45:27.721 "small_pool_count": 8192, 00:45:27.721 "large_pool_count": 1024, 00:45:27.721 "small_bufsize": 8192, 00:45:27.721 "large_bufsize": 135168, 00:45:27.721 "enable_numa": false 00:45:27.721 } 00:45:27.721 } 00:45:27.721 ] 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "subsystem": "sock", 00:45:27.721 "config": [ 00:45:27.721 { 00:45:27.721 "method": "sock_set_default_impl", 00:45:27.721 "params": { 00:45:27.721 "impl_name": "posix" 00:45:27.721 } 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "method": "sock_impl_set_options", 00:45:27.721 "params": { 00:45:27.721 "impl_name": "ssl", 00:45:27.721 "recv_buf_size": 4096, 00:45:27.721 "send_buf_size": 4096, 00:45:27.721 "enable_recv_pipe": true, 00:45:27.721 "enable_quickack": false, 00:45:27.721 "enable_placement_id": 0, 00:45:27.721 "enable_zerocopy_send_server": true, 00:45:27.721 "enable_zerocopy_send_client": false, 00:45:27.721 "zerocopy_threshold": 0, 00:45:27.721 "tls_version": 0, 00:45:27.721 "enable_ktls": false 00:45:27.721 } 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "method": "sock_impl_set_options", 00:45:27.721 "params": { 00:45:27.721 "impl_name": "posix", 00:45:27.721 "recv_buf_size": 2097152, 00:45:27.721 "send_buf_size": 2097152, 00:45:27.721 "enable_recv_pipe": true, 00:45:27.721 "enable_quickack": false, 00:45:27.721 "enable_placement_id": 0, 00:45:27.721 "enable_zerocopy_send_server": true, 00:45:27.721 
"enable_zerocopy_send_client": false, 00:45:27.721 "zerocopy_threshold": 0, 00:45:27.721 "tls_version": 0, 00:45:27.721 "enable_ktls": false 00:45:27.721 } 00:45:27.721 } 00:45:27.721 ] 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "subsystem": "vmd", 00:45:27.721 "config": [] 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "subsystem": "accel", 00:45:27.721 "config": [ 00:45:27.721 { 00:45:27.721 "method": "accel_set_options", 00:45:27.721 "params": { 00:45:27.721 "small_cache_size": 128, 00:45:27.721 "large_cache_size": 16, 00:45:27.721 "task_count": 2048, 00:45:27.721 "sequence_count": 2048, 00:45:27.721 "buf_count": 2048 00:45:27.721 } 00:45:27.721 } 00:45:27.721 ] 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "subsystem": "bdev", 00:45:27.721 "config": [ 00:45:27.721 { 00:45:27.721 "method": "bdev_set_options", 00:45:27.721 "params": { 00:45:27.721 "bdev_io_pool_size": 65535, 00:45:27.721 "bdev_io_cache_size": 256, 00:45:27.721 "bdev_auto_examine": true, 00:45:27.721 "iobuf_small_cache_size": 128, 00:45:27.721 "iobuf_large_cache_size": 16 00:45:27.721 } 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "method": "bdev_raid_set_options", 00:45:27.721 "params": { 00:45:27.721 "process_window_size_kb": 1024, 00:45:27.721 "process_max_bandwidth_mb_sec": 0 00:45:27.721 } 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "method": "bdev_iscsi_set_options", 00:45:27.721 "params": { 00:45:27.721 "timeout_sec": 30 00:45:27.721 } 00:45:27.721 }, 00:45:27.721 { 00:45:27.721 "method": "bdev_nvme_set_options", 00:45:27.721 "params": { 00:45:27.721 "action_on_timeout": "none", 00:45:27.721 "timeout_us": 0, 00:45:27.721 "timeout_admin_us": 0, 00:45:27.721 "keep_alive_timeout_ms": 10000, 00:45:27.721 "arbitration_burst": 0, 00:45:27.721 "low_priority_weight": 0, 00:45:27.721 "medium_priority_weight": 0, 00:45:27.721 "high_priority_weight": 0, 00:45:27.721 "nvme_adminq_poll_period_us": 10000, 00:45:27.721 "nvme_ioq_poll_period_us": 0, 00:45:27.721 "io_queue_requests": 512, 00:45:27.721 
"delay_cmd_submit": true, 00:45:27.721 "transport_retry_count": 4, 00:45:27.721 "bdev_retry_count": 3, 00:45:27.721 "transport_ack_timeout": 0, 00:45:27.721 "ctrlr_loss_timeout_sec": 0, 00:45:27.721 "reconnect_delay_sec": 0, 00:45:27.721 "fast_io_fail_timeout_sec": 0, 00:45:27.721 "disable_auto_failback": false, 00:45:27.721 "generate_uuids": false, 00:45:27.721 "transport_tos": 0, 00:45:27.721 "nvme_error_stat": false, 00:45:27.721 "rdma_srq_size": 0, 00:45:27.721 "io_path_stat": false, 00:45:27.721 "allow_accel_sequence": false, 00:45:27.721 "rdma_max_cq_size": 0, 00:45:27.721 "rdma_cm_event_timeout_ms": 0, 00:45:27.721 "dhchap_digests": [ 00:45:27.721 "sha256", 00:45:27.722 "sha384", 00:45:27.722 "sha512" 00:45:27.722 ], 00:45:27.722 "dhchap_dhgroups": [ 00:45:27.722 "null", 00:45:27.722 "ffdhe2048", 00:45:27.722 "ffdhe3072", 00:45:27.722 "ffdhe4096", 00:45:27.722 "ffdhe6144", 00:45:27.722 "ffdhe8192" 00:45:27.722 ] 00:45:27.722 } 00:45:27.722 }, 00:45:27.722 { 00:45:27.722 "method": "bdev_nvme_attach_controller", 00:45:27.722 "params": { 00:45:27.722 "name": "nvme0", 00:45:27.722 "trtype": "TCP", 00:45:27.722 "adrfam": "IPv4", 00:45:27.722 "traddr": "127.0.0.1", 00:45:27.722 "trsvcid": "4420", 00:45:27.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:27.722 "prchk_reftag": false, 00:45:27.722 "prchk_guard": false, 00:45:27.722 "ctrlr_loss_timeout_sec": 0, 00:45:27.722 "reconnect_delay_sec": 0, 00:45:27.722 "fast_io_fail_timeout_sec": 0, 00:45:27.722 "psk": "key0", 00:45:27.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:27.722 "hdgst": false, 00:45:27.722 "ddgst": false, 00:45:27.722 "multipath": "multipath" 00:45:27.722 } 00:45:27.722 }, 00:45:27.722 { 00:45:27.722 "method": "bdev_nvme_set_hotplug", 00:45:27.722 "params": { 00:45:27.722 "period_us": 100000, 00:45:27.722 "enable": false 00:45:27.722 } 00:45:27.722 }, 00:45:27.722 { 00:45:27.722 "method": "bdev_wait_for_examine" 00:45:27.722 } 00:45:27.722 ] 00:45:27.722 }, 00:45:27.722 { 00:45:27.722 
"subsystem": "nbd", 00:45:27.722 "config": [] 00:45:27.722 } 00:45:27.722 ] 00:45:27.722 }' 00:45:27.722 08:09:19 keyring_file -- keyring/file.sh@115 -- # killprocess 3223111 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3223111 ']' 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3223111 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3223111 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3223111' 00:45:27.722 killing process with pid 3223111 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@973 -- # kill 3223111 00:45:27.722 Received shutdown signal, test time was about 1.000000 seconds 00:45:27.722 00:45:27.722 Latency(us) 00:45:27.722 [2024-11-19T07:09:19.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:27.722 [2024-11-19T07:09:19.652Z] =================================================================================================================== 00:45:27.722 [2024-11-19T07:09:19.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:27.722 08:09:19 keyring_file -- common/autotest_common.sh@978 -- # wait 3223111 00:45:28.658 08:09:20 keyring_file -- keyring/file.sh@118 -- # bperfpid=3225158 00:45:28.658 08:09:20 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3225158 /var/tmp/bperf.sock 00:45:28.658 08:09:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3225158 ']' 00:45:28.658 08:09:20 keyring_file -- keyring/file.sh@116 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:45:28.658 08:09:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:28.658 08:09:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:28.658 08:09:20 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:45:28.658 "subsystems": [ 00:45:28.658 { 00:45:28.658 "subsystem": "keyring", 00:45:28.658 "config": [ 00:45:28.658 { 00:45:28.658 "method": "keyring_file_add_key", 00:45:28.658 "params": { 00:45:28.658 "name": "key0", 00:45:28.658 "path": "/tmp/tmp.f1mOcyjF2Z" 00:45:28.658 } 00:45:28.658 }, 00:45:28.658 { 00:45:28.658 "method": "keyring_file_add_key", 00:45:28.658 "params": { 00:45:28.658 "name": "key1", 00:45:28.658 "path": "/tmp/tmp.Wv5N5tNQ4F" 00:45:28.658 } 00:45:28.658 } 00:45:28.658 ] 00:45:28.658 }, 00:45:28.658 { 00:45:28.658 "subsystem": "iobuf", 00:45:28.658 "config": [ 00:45:28.658 { 00:45:28.658 "method": "iobuf_set_options", 00:45:28.658 "params": { 00:45:28.658 "small_pool_count": 8192, 00:45:28.658 "large_pool_count": 1024, 00:45:28.658 "small_bufsize": 8192, 00:45:28.658 "large_bufsize": 135168, 00:45:28.658 "enable_numa": false 00:45:28.658 } 00:45:28.658 } 00:45:28.658 ] 00:45:28.658 }, 00:45:28.658 { 00:45:28.658 "subsystem": "sock", 00:45:28.658 "config": [ 00:45:28.658 { 00:45:28.658 "method": "sock_set_default_impl", 00:45:28.658 "params": { 00:45:28.658 "impl_name": "posix" 00:45:28.658 } 00:45:28.658 }, 00:45:28.658 { 00:45:28.658 "method": "sock_impl_set_options", 00:45:28.658 "params": { 00:45:28.658 "impl_name": "ssl", 00:45:28.658 "recv_buf_size": 4096, 00:45:28.658 "send_buf_size": 4096, 00:45:28.658 "enable_recv_pipe": true, 00:45:28.658 "enable_quickack": false, 00:45:28.658 "enable_placement_id": 0, 00:45:28.658 "enable_zerocopy_send_server": true, 00:45:28.658 "enable_zerocopy_send_client": false, 00:45:28.658 
"zerocopy_threshold": 0, 00:45:28.658 "tls_version": 0, 00:45:28.658 "enable_ktls": false 00:45:28.658 } 00:45:28.658 }, 00:45:28.658 { 00:45:28.658 "method": "sock_impl_set_options", 00:45:28.658 "params": { 00:45:28.658 "impl_name": "posix", 00:45:28.658 "recv_buf_size": 2097152, 00:45:28.658 "send_buf_size": 2097152, 00:45:28.658 "enable_recv_pipe": true, 00:45:28.658 "enable_quickack": false, 00:45:28.658 "enable_placement_id": 0, 00:45:28.658 "enable_zerocopy_send_server": true, 00:45:28.658 "enable_zerocopy_send_client": false, 00:45:28.659 "zerocopy_threshold": 0, 00:45:28.659 "tls_version": 0, 00:45:28.659 "enable_ktls": false 00:45:28.659 } 00:45:28.659 } 00:45:28.659 ] 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "subsystem": "vmd", 00:45:28.659 "config": [] 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "subsystem": "accel", 00:45:28.659 "config": [ 00:45:28.659 { 00:45:28.659 "method": "accel_set_options", 00:45:28.659 "params": { 00:45:28.659 "small_cache_size": 128, 00:45:28.659 "large_cache_size": 16, 00:45:28.659 "task_count": 2048, 00:45:28.659 "sequence_count": 2048, 00:45:28.659 "buf_count": 2048 00:45:28.659 } 00:45:28.659 } 00:45:28.659 ] 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "subsystem": "bdev", 00:45:28.659 "config": [ 00:45:28.659 { 00:45:28.659 "method": "bdev_set_options", 00:45:28.659 "params": { 00:45:28.659 "bdev_io_pool_size": 65535, 00:45:28.659 "bdev_io_cache_size": 256, 00:45:28.659 "bdev_auto_examine": true, 00:45:28.659 "iobuf_small_cache_size": 128, 00:45:28.659 "iobuf_large_cache_size": 16 00:45:28.659 } 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "method": "bdev_raid_set_options", 00:45:28.659 "params": { 00:45:28.659 "process_window_size_kb": 1024, 00:45:28.659 "process_max_bandwidth_mb_sec": 0 00:45:28.659 } 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "method": "bdev_iscsi_set_options", 00:45:28.659 "params": { 00:45:28.659 "timeout_sec": 30 00:45:28.659 } 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "method": 
"bdev_nvme_set_options", 00:45:28.659 "params": { 00:45:28.659 "action_on_timeout": "none", 00:45:28.659 "timeout_us": 0, 00:45:28.659 "timeout_admin_us": 0, 00:45:28.659 "keep_alive_timeout_ms": 10000, 00:45:28.659 "arbitration_burst": 0, 00:45:28.659 "low_priority_weight": 0, 00:45:28.659 "medium_priority_weight": 0, 00:45:28.659 "high_priority_weight": 0, 00:45:28.659 "nvme_adminq_poll_period_us": 10000, 00:45:28.659 "nvme_ioq_poll_period_us": 0, 00:45:28.659 "io_queue_requests": 512, 00:45:28.659 "delay_cmd_submit": true, 00:45:28.659 "transport_retry_count": 4, 00:45:28.659 "bdev_retry_count": 3, 00:45:28.659 "transport_ack_timeout": 0, 00:45:28.659 "ctrlr_loss_timeout_sec": 0, 00:45:28.659 "reconnect_delay_sec": 0, 00:45:28.659 "fast_io_fail_timeout_sec": 0, 00:45:28.659 "disable_auto_failback": false, 00:45:28.659 "generate_uuids": false, 00:45:28.659 "transport_tos": 0, 00:45:28.659 "nvme_error_stat": false, 00:45:28.659 "rdma_srq_size": 0, 00:45:28.659 "io_path_stat": false, 00:45:28.659 "allow_accel_sequence": false, 00:45:28.659 "rdma_max_cq_size": 0, 00:45:28.659 "rdma_cm_event_timeout_ms": 0, 00:45:28.659 "dhchap_digests": [ 00:45:28.659 "sha256", 00:45:28.659 "sha384", 00:45:28.659 "sha512" 00:45:28.659 ], 00:45:28.659 "dhchap_dhgroups": [ 00:45:28.659 "null", 00:45:28.659 "ffdhe2048", 00:45:28.659 "ffdhe3072", 00:45:28.659 "ffdhe4096", 00:45:28.659 "ffdhe6144", 00:45:28.659 "ffdhe8192" 00:45:28.659 ] 00:45:28.659 } 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "method": "bdev_nvme_attach_controller", 00:45:28.659 "params": { 00:45:28.659 "name": "nvme0", 00:45:28.659 "trtype": "TCP", 00:45:28.659 "adrfam": "IPv4", 00:45:28.659 "traddr": "127.0.0.1", 00:45:28.659 "trsvcid": "4420", 00:45:28.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:28.659 "prchk_reftag": false, 00:45:28.659 "prchk_guard": false, 00:45:28.659 "ctrlr_loss_timeout_sec": 0, 00:45:28.659 "reconnect_delay_sec": 0, 00:45:28.659 "fast_io_fail_timeout_sec": 0, 00:45:28.659 "psk": "key0", 
00:45:28.659 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:28.659 "hdgst": false, 00:45:28.659 "ddgst": false, 00:45:28.659 "multipath": "multipath" 00:45:28.659 } 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "method": "bdev_nvme_set_hotplug", 00:45:28.659 "params": { 00:45:28.659 "period_us": 100000, 00:45:28.659 "enable": false 00:45:28.659 } 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "method": "bdev_wait_for_examine" 00:45:28.659 } 00:45:28.659 ] 00:45:28.659 }, 00:45:28.659 { 00:45:28.659 "subsystem": "nbd", 00:45:28.659 "config": [] 00:45:28.659 } 00:45:28.659 ] 00:45:28.659 }' 00:45:28.659 08:09:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:28.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:28.659 08:09:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:28.659 08:09:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:28.659 [2024-11-19 08:09:20.387879] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:45:28.659 [2024-11-19 08:09:20.388048] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225158 ] 00:45:28.659 [2024-11-19 08:09:20.536576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:28.918 [2024-11-19 08:09:20.675098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:29.485 [2024-11-19 08:09:21.133877] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:29.485 08:09:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:29.485 08:09:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:45:29.485 08:09:21 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:45:29.485 08:09:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.485 08:09:21 keyring_file -- keyring/file.sh@121 -- # jq length 00:45:29.743 08:09:21 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:45:29.743 08:09:21 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:45:29.743 08:09:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:29.743 08:09:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:29.743 08:09:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:29.743 08:09:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:29.743 08:09:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:30.000 08:09:21 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:45:30.000 08:09:21 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:45:30.000 08:09:21 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:30.000 08:09:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:30.000 08:09:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:30.000 08:09:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:30.000 08:09:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:30.258 08:09:22 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:45:30.258 08:09:22 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:45:30.258 08:09:22 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:45:30.258 08:09:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:45:30.825 08:09:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:45:30.825 08:09:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:45:30.825 08:09:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.f1mOcyjF2Z /tmp/tmp.Wv5N5tNQ4F 00:45:30.825 08:09:22 keyring_file -- keyring/file.sh@20 -- # killprocess 3225158 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3225158 ']' 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3225158 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3225158 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3225158' 00:45:30.825 killing process with pid 3225158 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@973 -- # kill 3225158 00:45:30.825 Received shutdown signal, test time was about 1.000000 seconds 00:45:30.825 00:45:30.825 Latency(us) 00:45:30.825 [2024-11-19T07:09:22.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:30.825 [2024-11-19T07:09:22.755Z] =================================================================================================================== 00:45:30.825 [2024-11-19T07:09:22.755Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:30.825 08:09:22 keyring_file -- common/autotest_common.sh@978 -- # wait 3225158 00:45:31.760 08:09:23 keyring_file -- keyring/file.sh@21 -- # killprocess 3222888 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3222888 ']' 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3222888 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3222888 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3222888' 00:45:31.760 killing process with pid 3222888 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@973 -- # kill 3222888 00:45:31.760 08:09:23 keyring_file -- common/autotest_common.sh@978 -- # wait 3222888 00:45:34.292 00:45:34.292 real 0m20.253s 00:45:34.292 user 0m46.183s 00:45:34.292 sys 0m3.650s 00:45:34.292 08:09:25 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:45:34.292 08:09:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:34.292 ************************************ 00:45:34.292 END TEST keyring_file 00:45:34.292 ************************************ 00:45:34.292 08:09:25 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:45:34.292 08:09:25 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:34.292 08:09:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:34.292 08:09:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:34.292 08:09:25 -- common/autotest_common.sh@10 -- # set +x 00:45:34.292 ************************************ 00:45:34.292 START TEST keyring_linux 00:45:34.292 ************************************ 00:45:34.292 08:09:25 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:45:34.292 Joined session keyring: 730062877 00:45:34.292 * Looking for test storage... 
00:45:34.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:34.292 08:09:25 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:34.292 08:09:25 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:45:34.292 08:09:25 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:34.292 08:09:25 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@345 -- # : 1 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:34.292 08:09:25 keyring_linux -- scripts/common.sh@368 -- # return 0 00:45:34.292 08:09:25 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:34.292 08:09:25 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:34.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:34.292 --rc genhtml_branch_coverage=1 00:45:34.292 --rc genhtml_function_coverage=1 00:45:34.292 --rc genhtml_legend=1 00:45:34.292 --rc geninfo_all_blocks=1 00:45:34.292 --rc geninfo_unexecuted_blocks=1 00:45:34.292 00:45:34.292 ' 00:45:34.292 08:09:25 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:34.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:34.292 --rc genhtml_branch_coverage=1 00:45:34.292 --rc genhtml_function_coverage=1 00:45:34.292 --rc genhtml_legend=1 00:45:34.292 --rc geninfo_all_blocks=1 00:45:34.292 --rc geninfo_unexecuted_blocks=1 00:45:34.292 00:45:34.292 ' 
00:45:34.292 08:09:25 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:34.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:34.292 --rc genhtml_branch_coverage=1 00:45:34.292 --rc genhtml_function_coverage=1 00:45:34.292 --rc genhtml_legend=1 00:45:34.292 --rc geninfo_all_blocks=1 00:45:34.293 --rc geninfo_unexecuted_blocks=1 00:45:34.293 00:45:34.293 ' 00:45:34.293 08:09:25 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:34.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:34.293 --rc genhtml_branch_coverage=1 00:45:34.293 --rc genhtml_function_coverage=1 00:45:34.293 --rc genhtml_legend=1 00:45:34.293 --rc geninfo_all_blocks=1 00:45:34.293 --rc geninfo_unexecuted_blocks=1 00:45:34.293 00:45:34.293 ' 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:34.293 08:09:25 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:45:34.293 08:09:25 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:34.293 08:09:25 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:34.293 08:09:25 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:34.293 08:09:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:34.293 08:09:25 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:34.293 08:09:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:34.293 08:09:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:45:34.293 08:09:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:34.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:45:34.293 /tmp/:spdk-test:key0 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:45:34.293 08:09:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:45:34.293 08:09:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:45:34.293 /tmp/:spdk-test:key1 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3225911 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:34.293 08:09:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3225911 00:45:34.293 08:09:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3225911 ']' 00:45:34.293 08:09:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:34.293 08:09:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:34.293 08:09:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:34.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:34.293 08:09:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:34.293 08:09:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:34.293 [2024-11-19 08:09:26.090846] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:45:34.293 [2024-11-19 08:09:26.091018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225911 ] 00:45:34.602 [2024-11-19 08:09:26.233601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:34.602 [2024-11-19 08:09:26.370128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:35.563 08:09:27 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:35.563 08:09:27 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:35.563 08:09:27 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:45:35.563 08:09:27 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:35.563 08:09:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:35.563 [2024-11-19 08:09:27.331959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:35.563 null0 00:45:35.563 [2024-11-19 08:09:27.363988] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:35.563 [2024-11-19 08:09:27.364619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:35.563 08:09:27 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:35.563 08:09:27 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:45:35.563 613085695 00:45:35.563 08:09:27 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:45:35.563 673354060 00:45:35.563 08:09:27 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3226171 00:45:35.563 08:09:27 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:45:35.563 08:09:27 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3226171 /var/tmp/bperf.sock 00:45:35.563 08:09:27 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3226171 ']' 00:45:35.563 08:09:27 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:35.563 08:09:27 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:35.563 08:09:27 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:35.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:35.564 08:09:27 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:35.564 08:09:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:35.564 [2024-11-19 08:09:27.470713] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:45:35.564 [2024-11-19 08:09:27.470863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226171 ] 00:45:35.823 [2024-11-19 08:09:27.614310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:35.823 [2024-11-19 08:09:27.749619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:36.756 08:09:28 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:36.757 08:09:28 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:45:36.757 08:09:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:45:36.757 08:09:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:45:37.014 08:09:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:45:37.014 08:09:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:45:37.581 08:09:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:37.581 08:09:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:45:37.840 [2024-11-19 08:09:29.597713] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:37.840 nvme0n1 00:45:37.840 08:09:29 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:45:37.840 08:09:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:45:37.840 08:09:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:37.840 08:09:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:37.840 08:09:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:37.840 08:09:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:38.098 08:09:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:45:38.098 08:09:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:38.098 08:09:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:45:38.098 08:09:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:45:38.098 08:09:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:38.098 08:09:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:38.098 08:09:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:45:38.357 08:09:30 keyring_linux -- keyring/linux.sh@25 -- # sn=613085695 00:45:38.357 08:09:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:45:38.357 08:09:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:38.357 08:09:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 613085695 == \6\1\3\0\8\5\6\9\5 ]] 00:45:38.357 08:09:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 613085695 00:45:38.358 08:09:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:45:38.358 08:09:30 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:38.616 Running I/O for 1 seconds... 00:45:39.552 7559.00 IOPS, 29.53 MiB/s 00:45:39.552 Latency(us) 00:45:39.552 [2024-11-19T07:09:31.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:39.552 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:39.552 nvme0n1 : 1.02 7551.31 29.50 0.00 0.00 16789.54 10534.31 28156.21 00:45:39.552 [2024-11-19T07:09:31.482Z] =================================================================================================================== 00:45:39.552 [2024-11-19T07:09:31.482Z] Total : 7551.31 29.50 0.00 0.00 16789.54 10534.31 28156.21 00:45:39.552 { 00:45:39.552 "results": [ 00:45:39.552 { 00:45:39.552 "job": "nvme0n1", 00:45:39.552 "core_mask": "0x2", 00:45:39.552 "workload": "randread", 00:45:39.552 "status": "finished", 00:45:39.552 "queue_depth": 128, 00:45:39.552 "io_size": 4096, 00:45:39.552 "runtime": 1.018101, 00:45:39.552 "iops": 7551.3136712369405, 00:45:39.552 "mibps": 29.4973190282693, 00:45:39.552 "io_failed": 0, 00:45:39.552 "io_timeout": 0, 00:45:39.552 "avg_latency_us": 16789.542991482638, 00:45:39.552 "min_latency_us": 10534.305185185185, 00:45:39.552 "max_latency_us": 28156.207407407408 00:45:39.552 } 00:45:39.552 ], 00:45:39.552 "core_count": 1 00:45:39.552 } 00:45:39.552 08:09:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:39.552 08:09:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:39.810 08:09:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:39.810 08:09:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:39.810 08:09:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:39.810 08:09:31 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:39.810 08:09:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:39.810 08:09:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:40.068 08:09:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:40.068 08:09:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:40.068 08:09:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:40.068 08:09:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:40.068 08:09:31 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:45:40.068 08:09:31 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:40.068 08:09:31 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:40.068 08:09:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:40.068 08:09:31 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:40.068 08:09:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:40.068 08:09:31 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:40.068 08:09:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:40.327 [2024-11-19 08:09:32.216520] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:40.327 [2024-11-19 08:09:32.216832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (107): Transport endpoint is not connected 00:45:40.327 [2024-11-19 08:09:32.217808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7780 (9): Bad file descriptor 00:45:40.327 [2024-11-19 08:09:32.218805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:40.327 [2024-11-19 08:09:32.218837] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:40.327 [2024-11-19 08:09:32.218858] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:40.327 [2024-11-19 08:09:32.218888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:45:40.327 request: 00:45:40.327 { 00:45:40.327 "name": "nvme0", 00:45:40.327 "trtype": "tcp", 00:45:40.327 "traddr": "127.0.0.1", 00:45:40.327 "adrfam": "ipv4", 00:45:40.327 "trsvcid": "4420", 00:45:40.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:40.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:40.327 "prchk_reftag": false, 00:45:40.327 "prchk_guard": false, 00:45:40.327 "hdgst": false, 00:45:40.327 "ddgst": false, 00:45:40.327 "psk": ":spdk-test:key1", 00:45:40.327 "allow_unrecognized_csi": false, 00:45:40.327 "method": "bdev_nvme_attach_controller", 00:45:40.327 "req_id": 1 00:45:40.327 } 00:45:40.327 Got JSON-RPC error response 00:45:40.327 response: 00:45:40.327 { 00:45:40.327 "code": -5, 00:45:40.327 "message": "Input/output error" 00:45:40.327 } 00:45:40.327 08:09:32 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:45:40.327 08:09:32 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:40.327 08:09:32 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:40.327 08:09:32 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@33 -- # sn=613085695 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 613085695 00:45:40.327 1 links removed 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:40.327 
08:09:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@33 -- # sn=673354060 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 673354060 00:45:40.327 1 links removed 00:45:40.327 08:09:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3226171 00:45:40.327 08:09:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3226171 ']' 00:45:40.327 08:09:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3226171 00:45:40.327 08:09:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:40.327 08:09:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:40.327 08:09:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3226171 00:45:40.586 08:09:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:40.586 08:09:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:40.586 08:09:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3226171' 00:45:40.586 killing process with pid 3226171 00:45:40.586 08:09:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 3226171 00:45:40.586 Received shutdown signal, test time was about 1.000000 seconds 00:45:40.586 00:45:40.586 Latency(us) 00:45:40.586 [2024-11-19T07:09:32.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:40.586 [2024-11-19T07:09:32.516Z] =================================================================================================================== 00:45:40.586 [2024-11-19T07:09:32.516Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:40.586 08:09:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 3226171 
00:45:41.520 08:09:33 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3225911 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3225911 ']' 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3225911 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3225911 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3225911' 00:45:41.520 killing process with pid 3225911 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@973 -- # kill 3225911 00:45:41.520 08:09:33 keyring_linux -- common/autotest_common.sh@978 -- # wait 3225911 00:45:44.051 00:45:44.051 real 0m9.778s 00:45:44.051 user 0m16.964s 00:45:44.051 sys 0m1.911s 00:45:44.051 08:09:35 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:44.051 08:09:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:44.051 ************************************ 00:45:44.051 END TEST keyring_linux 00:45:44.051 ************************************ 00:45:44.051 08:09:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:45:44.051 08:09:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:44.051 08:09:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:44.051 08:09:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:45:44.051 08:09:35 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:45:44.051 08:09:35 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:45:44.051 08:09:35 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:45:44.051 08:09:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:44.051 08:09:35 -- common/autotest_common.sh@10 -- # set +x 00:45:44.051 08:09:35 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:45:44.051 08:09:35 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:45:44.051 08:09:35 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:45:44.051 08:09:35 -- common/autotest_common.sh@10 -- # set +x 00:45:45.976 INFO: APP EXITING 00:45:45.976 INFO: killing all VMs 00:45:45.976 INFO: killing vhost app 00:45:45.976 INFO: EXIT DONE 00:45:46.911 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:45:46.911 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:45:46.911 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:45:46.911 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:45:46.911 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:45:46.911 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:45:46.911 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:45:46.911 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:45:46.911 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:45:46.911 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:45:46.911 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:45:46.911 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:45:46.911 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:45:46.911 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:45:46.911 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:45:46.911 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:45:46.911 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:45:48.291 Cleaning 00:45:48.291 Removing: /var/run/dpdk/spdk0/config 00:45:48.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:48.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:48.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:48.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:48.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:48.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:48.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:48.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:48.291 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:48.291 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:48.291 Removing: /var/run/dpdk/spdk1/config 00:45:48.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:48.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:48.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:48.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:48.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:48.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:48.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:48.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:48.291 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:48.291 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:48.291 Removing: /var/run/dpdk/spdk2/config 00:45:48.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:48.291 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:45:48.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:45:48.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:45:48.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:45:48.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:45:48.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:45:48.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:45:48.291 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:45:48.291 Removing: /var/run/dpdk/spdk2/hugepage_info
00:45:48.292 Removing: /var/run/dpdk/spdk3/config
00:45:48.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:45:48.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:45:48.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:45:48.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:45:48.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:45:48.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:45:48.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:45:48.292 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:45:48.292 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:45:48.292 Removing: /var/run/dpdk/spdk3/hugepage_info
00:45:48.292 Removing: /var/run/dpdk/spdk4/config
00:45:48.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:45:48.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:45:48.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:45:48.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:45:48.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:45:48.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:45:48.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:45:48.292 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:45:48.292 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:45:48.292 Removing: /var/run/dpdk/spdk4/hugepage_info
00:45:48.292 Removing: /dev/shm/bdev_svc_trace.1
00:45:48.292 Removing: /dev/shm/nvmf_trace.0
00:45:48.292 Removing: /dev/shm/spdk_tgt_trace.pid2812760
00:45:48.292 Removing: /var/run/dpdk/spdk0
00:45:48.292 Removing: /var/run/dpdk/spdk1
00:45:48.292 Removing: /var/run/dpdk/spdk2
00:45:48.292 Removing: /var/run/dpdk/spdk3
00:45:48.292 Removing: /var/run/dpdk/spdk4
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2809866
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2811001
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2812760
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2813488
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2814438
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2814856
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2815841
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2815984
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2816633
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2817967
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2819163
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2819767
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2820363
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2820969
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2821440
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2821727
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2821882
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2822202
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2822646
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2825407
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2825847
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2826434
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2826648
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2828518
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2828659
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2829896
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2830035
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2830590
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2830733
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2831049
00:45:48.292 Removing: /var/run/dpdk/spdk_pid2831244
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2832334
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2832503
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2832827
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2835343
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2838241
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2845514
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2846035
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2848703
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2848984
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2851902
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2855887
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2858332
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2866113
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2871827
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2873164
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2873966
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2885017
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2887581
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2945537
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2948970
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2953692
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2959934
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2989366
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2992557
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2993740
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2995196
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2995473
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2995763
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2996150
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2996988
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2998444
00:45:48.551 Removing: /var/run/dpdk/spdk_pid2999838
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3000537
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3002687
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3003858
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3004686
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3007357
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3011065
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3011067
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3011068
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3013526
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3016003
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3019535
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3043677
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3046705
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3050740
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3052209
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3053828
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3055410
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3058469
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3061699
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3064858
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3069487
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3069531
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3072654
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3072794
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3073052
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3073325
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3073456
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3074531
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3075829
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3077004
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3078184
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3079361
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3080542
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3084609
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3084937
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3086337
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3087194
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3091290
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3093897
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3097711
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3101433
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3108302
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3112944
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3113064
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3126085
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3127253
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3127911
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3128467
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3129557
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3130107
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3130764
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3131311
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3134118
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3134469
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3138525
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3138709
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3142213
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3144965
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3151992
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3152514
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3155152
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3155361
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3158938
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3162764
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3165051
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3172227
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3177816
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3179136
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3179923
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3190975
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3194021
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3196158
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3201706
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3201721
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3204857
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3206277
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3207814
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3208776
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3210297
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3211291
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3216964
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3217353
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3217745
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3219634
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3219954
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3220310
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3222888
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3223111
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3225158
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3225911
00:45:48.551 Removing: /var/run/dpdk/spdk_pid3226171
00:45:48.551 Clean
00:45:48.810 08:09:40 -- common/autotest_common.sh@1453 -- # return 0
00:45:48.810 08:09:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:45:48.810 08:09:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:45:48.810 08:09:40 -- common/autotest_common.sh@10 -- # set +x
00:45:48.810 08:09:40 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:45:48.810 08:09:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:45:48.810 08:09:40 -- common/autotest_common.sh@10 -- # set +x
00:45:48.810 08:09:40 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:45:48.810 08:09:40 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:45:48.810 08:09:40 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:45:48.810 08:09:40 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:45:48.810 08:09:40 -- spdk/autotest.sh@398 -- # hostname
00:45:48.810 08:09:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:45:49.069 geninfo: WARNING: invalid characters removed from testname!
00:46:27.776 08:10:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:30.306 08:10:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:34.490 08:10:25 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:37.770 08:10:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:41.955 08:10:33 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:44.484 08:10:36 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:46:48.669 08:10:39 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:46:48.669 08:10:39 -- spdk/autorun.sh@1 -- $ timing_finish
00:46:48.669 08:10:39 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:46:48.669 08:10:39 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:46:48.669 08:10:39 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:46:48.669 08:10:39 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:46:48.669 + [[ -n 2738482 ]]
00:46:48.669 + sudo kill 2738482
00:46:48.680 [Pipeline] }
00:46:48.695 [Pipeline] // stage
00:46:48.700 [Pipeline] }
00:46:48.714 [Pipeline] // timeout
00:46:48.719 [Pipeline] }
00:46:48.732 [Pipeline] // catchError
00:46:48.737 [Pipeline] }
00:46:48.752 [Pipeline] // wrap
00:46:48.758 [Pipeline] }
00:46:48.770 [Pipeline] // catchError
00:46:48.779 [Pipeline] stage
00:46:48.782 [Pipeline] { (Epilogue)
00:46:48.795 [Pipeline] catchError
00:46:48.796 [Pipeline] {
00:46:48.809 [Pipeline] echo
00:46:48.811 Cleanup processes
00:46:48.817 [Pipeline] sh
00:46:49.105 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:46:49.105 3239690 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:46:49.120 [Pipeline] sh
00:46:49.478 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:46:49.478 ++ grep -v 'sudo pgrep'
00:46:49.478 ++ awk '{print $1}'
00:46:49.478 + sudo kill -9
00:46:49.478 + true
00:46:49.518 [Pipeline] sh
00:46:49.804 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:47:02.019 [Pipeline] sh
00:47:02.309 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:47:02.309 Artifacts sizes are good
00:47:02.325 [Pipeline] archiveArtifacts
00:47:02.332 Archiving artifacts
00:47:02.492 [Pipeline] sh
00:47:02.778 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:47:02.793 [Pipeline] cleanWs
00:47:02.804 [WS-CLEANUP] Deleting project workspace...
00:47:02.804 [WS-CLEANUP] Deferred wipeout is used...
00:47:02.812 [WS-CLEANUP] done
00:47:02.814 [Pipeline] }
00:47:02.831 [Pipeline] // catchError
00:47:02.843 [Pipeline] sh
00:47:03.141 + logger -p user.info -t JENKINS-CI
00:47:03.149 [Pipeline] }
00:47:03.163 [Pipeline] // stage
00:47:03.168 [Pipeline] }
00:47:03.182 [Pipeline] // node
00:47:03.187 [Pipeline] End of Pipeline
00:47:03.224 Finished: SUCCESS